Task 01 – Familiarizing with the equipment and preparing an action plan.
Task 02 – Prepare the work area.
Task 03 – Fix the hardware equipment and assemble three PCs.
Task 04 – Install a NIC in each PC.
Task 05 – Cable the three computers and configure the peer-to-peer network using a hub or switch.
Task 06 – Install the Windows operating system on each PC.
Task 07 – Install and configure the printer on one of the PCs.
Task 08 – Share printer with other PCs in the LAN.
Task 09 – Establish one shared folder.
Task 10 – Create a test document on one of the PCs and copy the file to each of the other PCs in the network.
Task 11 – Test the printer by printing the test document from each of the networked PCs.
Task No.    Time allocation
Task 01     1 hour
Task 02     30 minutes
Task 03     1½ hours
Task 04     1½ hours
Task 05     1½ hours
Task 06     3 hours
Task 07     15 minutes
Task 08     15 minutes
Task 09     15 minutes
Task 10     10 minutes
Task 11     05 minutes
Total time allocation – 10 hours
In a peer-to-peer network there are no dedicated servers and no hierarchy among the computers. Each user decides who may access the resources on his or her computer.
In 1945, the idea of the first computer with a processing unit capable of performing different tasks was published by John von Neumann. The computer was called the EDVAC and was finished in 1949. These first primitive computers, such as the EDVAC and the Harvard Mark I, were incredibly bulky and large: their processors were built from thousands of individual components, such as vacuum tubes and relays, to perform the computer's tasks.
Starting in the 1950s, the transistor was introduced for the CPU. This was a vital improvement because transistors helped remove much of the bulky material and wiring and allowed for more intricate and reliable CPUs. The 1960s and 1970s brought about the advent of microprocessors. These were very small, with internal features measured in micrometres, and were much more powerful. Microprocessors helped this technology become much more available to the public due to their size and affordability. Eventually, companies like Intel and IBM helped alter microprocessor technology into what we see today. The computer processor has evolved from a big, bulky contraption to a minuscule chip.
Computer processors are responsible for four basic operations. Their first job is to fetch the information from a memory source. Subsequently, the CPU decodes the information to make it usable for the device in question. The third step is the execution of the information, which is when the CPU acts upon the information it has received. The fourth and final step is the write back, in which the CPU writes the result of the operation back to a register or to memory.
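The four-step cycle above can be sketched as a toy interpreter loop. This is only an illustration of the idea, not any real CPU's instruction set; the opcodes, the single accumulator register, and the instruction format here are invented for the example.

```python
# A toy fetch / decode / execute / write-back loop. The instruction
# set (LOAD, ADD, HALT) is hypothetical and exists only to show the cycle.

def run(program):
    acc = 0                 # a single accumulator register
    pc = 0                  # program counter
    memory = list(program)  # instructions stored as (opcode, operand) pairs
    while pc < len(memory):
        instruction = memory[pc]        # 1. fetch from the memory source
        opcode, operand = instruction   # 2. decode into opcode and operand
        if opcode == "LOAD":            # 3. execute the decoded operation
            result = operand
        elif opcode == "ADD":
            result = acc + operand
        elif opcode == "HALT":
            break
        acc = result                    # 4. write back the result to the register
        pc += 1
    return acc

print(run([("LOAD", 5), ("ADD", 3), ("HALT", 0)]))  # prints 8
```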
Two companies are responsible for a vast majority of CPUs sold all around the world. Intel Corporation is the largest CPU manufacturer in the world and is the maker of a majority of the CPUs found in personal computers. Advanced Micro Devices, Inc., known as AMD, has in recent years been the main competitor for Intel in the CPU industry.
The CPU has greatly helped the world progress into the digital age. It has allowed a number of computers and other machines to be produced that are very important and essential to our global society. For example, many of the medical advances made today are a direct result of the ability of computer processors. As CPUs improve, the devices they are used in will also improve and their significance will become even greater.
The term Video Graphics Array (VGA) refers specifically to the display hardware first introduced with the IBM PS/2 line of computers in 1987, but through its widespread adoption has also come to mean either an analogue computer display standard, the 15-pin D-sub miniature VGA connector or the 640×480 resolution itself. While this resolution has been superseded in the personal computer market, it is becoming a popular resolution on mobile devices.
Video Graphics Array (VGA) was the last graphical standard introduced by IBM that the majority of PC clone manufacturers conformed to, making it today (as of 2009) the lowest common denominator that all PC graphics hardware supports, before a device-specific driver is loaded into the computer. For example, the MS-Windows splash screen appears while the machine is still operating in VGA mode, which is the reason that this screen always appears in reduced resolution and colour depth.
VGA was officially superseded by IBM’s XGA standard, but in reality it was superseded by numerous slightly different extensions to VGA made by clone manufacturers that came to be known collectively as “Super VGA”.
VGA is referred to as an “array” instead of an “adapter” because it was implemented from the start as a single chip (an ASIC), replacing the Motorola 6845 and dozens of discrete logic chips that covered the full-length ISA boards of the MDA, CGA, and EGA. Its single-chip implementation also allowed the VGA to be placed directly on a PC’s motherboard with a minimum of difficulty (it only required video memory, timing crystals and an external RAMDAC), and the first IBM PS/2 models were equipped with VGA on the motherboard.
Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.
By contrast, storage devices such as tapes, magnetic discs and optical discs rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than data transfer, and the retrieval time varies based on the physical location of the next item. The word RAM is often associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. Many other types of memory are random access as well, including most types of ROM and a type of flash memory called NOR flash.
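The contrast between random access and tape-style sequential access can be made concrete by counting access steps. The step counts below are purely illustrative cost models, not real device timings.

```python
# A sketch contrasting constant-time random access (RAM) with
# position-dependent sequential access (tape). Costs are illustrative.

def ram_read(memory, address):
    """Any address costs the same: one step, regardless of location."""
    return memory[address], 1

def tape_read(tape, head_position, target):
    """Cost depends on how far the head must move from its current position."""
    steps = abs(target - head_position)
    return tape[target], steps

data = list(range(1000))
_, ram_cost_near = ram_read(data, 1)
_, ram_cost_far = ram_read(data, 999)
_, tape_cost_near = tape_read(data, 0, 1)
_, tape_cost_far = tape_read(data, 0, 999)

print(ram_cost_near, ram_cost_far)    # 1 1   -> constant time
print(tape_cost_near, tape_cost_far)  # 1 999 -> varies with location
```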
An early type of widespread writable random-access memory was the magnetic core memory, developed from 1949 to 1952, and subsequently used in most computers up until the development of the static and dynamic integrated RAM circuits in the late 1960s and early 1970s. Before this, computers used relays, delay line memory, or various kinds of vacuum tube arrangements to implement “main” memory functions (i.e., hundreds or thousands of bits); some of which were random access, some not. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers and random-access register banks. Modern types of writable RAM generally store a bit of data in either the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults called memory errors in the stored data, using parity bits or error correction codes. RAM of the read-only type, ROM, instead uses a metal mask to permanently enable/disable selected transistors, instead of storing a charge in them.
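The parity-bit scheme mentioned above can be shown in a few lines. This is a minimal sketch of even parity: a single flipped bit is detected but cannot be located or corrected; that requires a full error-correcting code.

```python
# Even parity: store one extra bit so the total count of 1s is even.
# A single flipped bit makes the count odd, revealing a memory error.

def add_parity(bits):
    parity = sum(bits) % 2
    return bits + [parity]          # stored word now has an even number of 1s

def check(word):
    return sum(word) % 2 == 0       # True if no single-bit error is detected

stored = add_parity([1, 0, 1, 1])
assert check(stored)                # freshly stored word passes the check

corrupted = stored[:]
corrupted[2] ^= 1                   # simulate a memory error flipping one bit
assert not check(corrupted)         # the parity check now fails
print("single-bit error detected")
```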
As both SRAM and DRAM are volatile, other forms of computer storage, such as disks and magnetic tapes, have been used as persistent storage in traditional computers. Many newer products instead rely on flash memory to maintain data when not in use, such as PDAs or small music players. Certain personal computers, such as many rugged computers and netbooks, have also replaced magnetic disks with flash drives. With flash memory, only the NOR type is capable of true random access, allowing direct code execution, and is therefore often used instead of ROM; the lower-cost NAND type is commonly used for bulk storage in memory cards and solid-state drives.
Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.
Top L-R, DDR2 with heat-spreader, DDR2 without heat-spreader, Laptop DDR2, DDR, Laptop DDR
1 Megabit chip – one of the last models developed by VEB Carl Zeiss Jena in 1989
Many computer systems have a memory hierarchy consisting of CPU registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as “RAM” by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that the access time of rotating storage media or a tape is variable. The overall goal of using a memory hierarchy is to obtain the highest possible average access performance while minimizing the total cost of the entire memory system. (Generally, the memory hierarchy follows the access times, with the fast CPU registers at the top and the slow hard drive at the bottom.)
In many modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or too small for current purposes. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard-drives, CD-ROMs, and several other parts of the computer system.
A hard disk drive (often shortened as hard disk, hard drive, or HDD) is a non-volatile storage device that stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, “drive” refers to the motorized mechanical aspect that is distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.
HDDs (introduced in 1956 as data storage for an IBM accounting computer) were originally developed for use with general-purpose computers. During the 1990s, the need for large-scale, reliable storage, independent of a particular device, led to the introduction of dedicated storage systems such as RAID arrays, network-attached storage (NAS) systems, and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data. In the 21st century, HDD usage expanded into consumer applications such as camcorders, cell phones (e.g. the Nokia N91), digital audio players, digital video players, digital video recorders, personal digital assistants and video game consoles.
HDDs record data by magnetizing ferromagnetic material directionally, to represent either a 0 or a 1 binary digit. They read the data back by detecting the magnetization of the material. A typical HDD design consists of a spindle that holds one or more flat circular disks called platters, onto which the data are recorded. The platters are made from a non-magnetic material, usually aluminium alloy or glass, and are coated with a thin layer of magnetic material, typically 10-20 nm in thickness with an outer layer of carbon for protection. Older disks used iron (III) oxide as the magnetic material, but current disks use a cobalt-based alloy.
The platters are spun at very high speeds. Information is written to a platter as it rotates past devices called read-and-write heads that operate very close (tens of nanometres in new drives) over the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. There is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter as it spins. The arm is moved using a voice coil actuator or in some older designs a stepper motor.
The magnetic surface of each platter is conceptually divided into many small sub-micrometre-sized magnetic regions, each of which is used to encode a single binary unit of information. Initially the regions were oriented horizontally, but beginning about 2005, the orientation was changed to perpendicular. Due to the polycrystalline nature of the magnetic material, each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size and each forms a single magnetic domain. Each magnetic region in total forms a magnetic dipole, which generates a highly localized magnetic field nearby. A write head magnetizes a region by generating a strong local magnetic field. Early HDDs used an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic induction. Later versions of inductive heads included metal-in-gap (MIG) heads and thin-film heads. As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changes according to the strength of the magnetism from the platter. Later development made use of spintronics; in these heads, the magnetoresistive effect was much greater than in earlier types, and was dubbed “giant” magnetoresistance (GMR). In today’s heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magneto-resistive while the write element is typically thin-film inductive.
Hard disk heads are kept from contacting the platter surface by the air that is dragged along extremely close to the platter; that air moves at, or close to, the platter speed. The read-and-write head is mounted on a block called a slider, and the surface next to the platter is shaped to keep it just barely out of contact. This is a type of air bearing.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers, separated by a three-atom-thick layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects and allow greater recording densities is perpendicular recording, first shipped in 2005; as of 2007 the technology was used in many HDDs.
The grain boundaries turn out to be very important in HDD design. Because the grains are very small and close to each other, the coupling between adjacent grains is very strong. When one grain is magnetized, the adjacent grains tend to be aligned parallel to it or demagnetized, which harms both the stability of the data and the signal-to-noise ratio. A clear grain boundary can weaken the coupling of the grains and thereby increase the signal-to-noise ratio. In longitudinal recording, the single-domain grains have uniaxial anisotropy with easy axes lying in the film plane. The consequence of this arrangement is that adjacent magnets repel each other, so the magnetostatic energy is so large that it is difficult to increase areal density. Perpendicular recording media, on the other hand, have the easy axes of the grains oriented perpendicular to the disk plane. Adjacent magnets attract each other and the magnetostatic energy is much lower, so a much higher areal density can be achieved in perpendicular recording. Another unique feature of perpendicular recording is that a soft magnetic underlayer is incorporated into the recording disk. This underlayer conducts the writing magnetic flux so that writing is more efficient, and it allows a higher-anisotropy medium film, such as L10-FePt and rare-earth magnets, to be used.
Opened hard drive with top magnet removed, showing copper head actuator coil (top right).
A hard disk drive with the platters and motor hub removed showing the copper colored stator coils surrounding a bearing at the center of the spindle motor. The orange stripe along the side of the arm is a thin printed-circuit cable. The spindle bearing is in the center.
A typical hard drive has two electric motors, one to spin the disks and one to position the read/write head assembly. The disk motor has an external rotor attached to the platters; the stator windings are fixed in place. The actuator has a read-write head under the tip of its very end (near center); a thin printed-circuit cable connects the read-write head to the hub of the actuator. A flexible, somewhat ‘U’-shaped, ribbon cable, seen edge-on below and to the left of the actuator arm in the first image and more clearly in the second, continues the connection from the head to the controller board on the opposite side.
The head support arm is very light, but also rigid; in modern drives, acceleration at the head reaches 250 Gs.
The silver-colored structure at the upper left of the first image is the top plate of the permanent-magnet and moving coil motor that swings the heads to the desired position (it is shown removed in the second image). The plate supports a thin neodymium-iron-boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives only have one magnet).
The voice coil itself is shaped rather like an arrowhead and is made of doubly coated copper magnet wire. The inner coating is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the actuator bearing center) interact with the magnetic field, developing a tangential force that rotates the actuator. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. (See magnetic field: force on a charged particle.) If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half N pole and half S pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.
A floppy disk is a data storage medium that is composed of a disk of thin, flexible (“floppy”) magnetic storage medium encased in a square or rectangular plastic shell. Floppy disks are read and written by a floppy disk drive or FDD, the initials of which should not be confused with “fixed disk drive,” which is another term for a (non-removable) type of hard disk drive. Invented by IBM, floppy disks in 8-inch (200 mm), 5¼-inch (133.35 mm), and 3½-inch (90 mm) formats enjoyed many years as a popular and ubiquitous form of data storage and exchange, from the mid-1970s to the late 1990s. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have now been largely superseded by USB flash drives, external hard drives, CDs, DVDs, and memory cards (such as Secure Digital).
The 5¼-inch disk had a large circular hole in the center for the spindle of the drive and a small oval aperture in both sides of the plastic to allow the heads of the drive to read and write the data. The magnetic medium could be spun by rotating it from the middle hole. A small notch on the right-hand side of the disk identified whether the disk was read-only or writable, detected by a mechanical switch or phototransistor above it. Another LED/phototransistor pair located near the center of the disk could detect a small hole once per rotation, called the index hole, in the magnetic disk. It was used to detect the start of each track, and whether or not the disk was rotating at the correct speed; some operating systems, such as Apple DOS, did not use index sync, and the drives designed for such systems often lacked the index hole sensor. Disks of this type were said to be soft-sector disks. Very early 8-inch and 5¼-inch disks also had physical holes for each sector, and were termed hard-sector disks. Inside the disk were two layers of fabric designed to reduce friction between the medium and the outer casing, with the medium sandwiched in the middle. The outer casing was usually a one-part sheet, folded double with flaps glued or spot-welded together. A catch was lowered into position in front of the drive to prevent the disk from emerging, as well as to raise or lower the spindle (and, in two-sided drives, the upper read/write head).
The 8-inch disk was very similar in structure to the 5¼-inch disk, with the exception that the read-only logic was reversed: the slot on the side had to be taped over to allow writing.
The 3½-inch disk is made of two pieces of rigid plastic, with the fabric-medium-fabric sandwich in the middle to remove dust and dirt. The front has only a label and a small aperture for reading and writing data, protected by a spring-loaded metal or plastic cover, which is pushed back on entry into the drive.
Newer 5¼-inch drives and all 3½-inch drives automatically engage when the user inserts a disk, and disengage and eject the disk when the user presses the eject button. On Apple Macintosh computers with built-in floppy drives, the disk is ejected by a motor (similar to a VCR) rather than manually, and there is no eject button; the disk’s desktop icon is dragged onto the Trash icon to eject a disk.
The reverse has a similar covered aperture, as well as a hole to allow the spindle to connect to a metal plate glued to the medium. Two holes, bottom left and right, indicate the write-protect status and high-density disk respectively; a hole means protected or high density, and a covered gap means write-enabled or low density. A notch top right ensures that the disk is inserted correctly, and an arrow top left indicates the direction of insertion. The drive usually has a button that, when pressed, will spring the disk out at varying degrees of force. Some would barely make it out of the disk drive; others would shoot out at a fairly high speed. In a majority of drives, the ejection force is provided by the spring that holds the cover shut, and therefore the ejection speed depends on this spring. In PC-type machines, a floppy disk can be inserted or ejected manually at any time (evoking an error message or even lost data in some cases), as the drive is not continuously monitored for status, and so programs can make assumptions that do not match the actual status.
With Apple Macintosh computers, disk drives are continuously monitored by the OS; an inserted disk is automatically searched for content, and a disk is ejected only when the software agrees it should be. This kind of disk drive (starting with the slim “Twiggy” drives of the late Apple “Lisa”) does not have an eject button, but uses a motorized mechanism to eject disks; this action is triggered by the OS software (e.g., when the user drags the “drive” icon to the “trash can” icon). Should this not work (as in the case of a power failure or drive malfunction), one can insert a straightened paper clip into a small hole at the drive’s front, thereby forcing the disk to eject (similar to the hole found on CD/DVD drives). Some other computer designs (such as the Commodore Amiga) monitor for a new disk continuously but still have push-button eject mechanisms.
The 3-inch disk, widely used on Amstrad CPC machines, bears much similarity to the 3½-inch type, with some unique and somewhat curious features. One example is the rectangular plastic casing, taller than a 3½-inch disk but narrower, and more than twice as thick, almost the size of a standard compact audio cassette. This made the disk look more like a greatly oversized present-day memory card or a standard PC Card notebook expansion card rather than a floppy disk. Despite the size, the actual 3-inch magnetic-coated disk occupied less than 50% of the space inside the casing, the rest being used by the complex protection and sealing mechanisms implemented on the disks. Such mechanisms were largely responsible for the thickness, length and high costs of the 3-inch disks. On the Amstrad machines the disks were typically flipped over to use both sides, as opposed to being truly double-sided. Double-sided mechanisms were available but rare.
Universal Serial Bus (USB) connectors on the back of a computer let you attach everything from mice to printers quickly and easily. The operating system supports USB as well, so the installation of device drivers is quick and easy, too. Compared to other ways of connecting devices to your computer, USB devices are incredibly simple. Here we will look at USB ports from both a user and a technical standpoint: you will learn why the USB system is so flexible and how it is able to support so many devices so easily. Anyone who has been around computers for more than two or three years knows the problem that the Universal Serial Bus is trying to solve: in the past, connecting devices to computers was a real headache!
The goal of USB is to end all of these headaches. The Universal Serial Bus gives you a single, standardized, easy-to-use way to connect up to 127 devices to a computer. Just about every peripheral made now comes in a USB version. A sample list of USB devices that you can buy today includes:
In the next section, we’ll look at the USB cables and connectors that allow your computer to communicate with these devices.
A parallel port is a type of interface found on computers (personal and otherwise) for connecting various peripherals. It is also known as a printer port or Centronics port. The IEEE 1284 standard defines the bi-directional version of the port.
Before the advent of USB, the parallel interface was adapted to access a number of peripheral devices other than printers. Probably among the earliest devices to use the parallel port were dongles, used as a hardware-key form of software copy protection. Zip drives and scanners were early implementations, followed by external modems, sound cards, webcams, gamepads, joysticks, and external hard disk drives and CD-ROM drives. Adapters were available to run SCSI devices via the parallel port. Other devices such as EPROM programmers and hardware controllers could also be connected via the parallel port.
At the consumer level, the USB interface—and in some cases Ethernet—has effectively replaced the parallel printer port. Many manufacturers of personal computers and laptops consider parallel to be a legacy port and no longer include the parallel interface. USB-to-parallel adapters are available to use parallel-only printers with USB-only systems. However, due to the simplicity of its implementation, the parallel port is often used for interfacing with custom-made peripherals. In versions of Windows that did not use the Windows NT kernel (as well as DOS and some other operating systems), programs could access the parallel port directly with simple port read and write instructions.
Keyboard, in computer science, a keypad device with buttons or keys that a user presses to enter data characters and commands into a computer. They are one of the fundamental pieces of personal computer (PC) hardware, along with the central processing unit (CPU), the monitor or screen, and the mouse or other cursor device.
The most common English-language key pattern for typewriters and keyboards is called QWERTY, after the layout of the first six letters in the top row of its keys (from left to right). In the late 1860s, American inventor and printer Christopher Sholes invented the modern form of the typewriter. Sholes created the QWERTY keyboard layout by separating commonly used letters so that typists would type more slowly and not jam their mechanical typewriters. Subsequent generations of typists have learned to type using QWERTY keyboards, prompting manufacturers to maintain this key orientation on typewriters.
Computer keyboards copied the QWERTY key layout and have followed the precedent set by typewriter manufacturers of keeping this convention. Modern keyboards connect with the computer CPU by cable or by infrared transmitter. When a key on the keyboard is pressed, a numeric code is sent to the keyboard’s driver software and to the computer’s operating system software. The driver translates this data into a specialized command that the computer’s CPU and application programs understand. In this way, users may enter text, commands, numbers, or other data. The term character is generally reserved for letters, numbers, and punctuation, but may also include control codes, graphical symbols, mathematical symbols, and graphic images.
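The driver translation step described above can be modelled as a simple lookup table. The numeric codes and the table below are invented for illustration; real keyboards use standardized scan-code sets that the operating system's keyboard driver handles.

```python
# A toy model of keyboard-driver translation: raw numeric key codes
# arrive from the hardware and the driver maps them to characters.
# These particular code numbers are hypothetical.

SCAN_CODE_TABLE = {
    30: "a",
    48: "b",
    57: " ",
    28: "\n",   # Enter
}

def translate(scan_codes):
    """Turn raw key codes into the characters an application receives."""
    return "".join(SCAN_CODE_TABLE.get(code, "?") for code in scan_codes)

print(translate([30, 48]))  # prints "ab"
```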
Almost all standard English-language keyboards have keys for each character of the American Standard Code for Information Interchange (ASCII) character set, as well as various function keys. Most computers and applications today use seven or eight data bits for each character. For example, ASCII code 65 is equal to the letter A. The function keys generate short, fixed sequences of character codes that instruct application programs running on the computer to perform certain actions. Often, keyboards also have directional buttons for moving the screen cursor, separate numeric pads for entering numeric and arithmetic data, and a switch for turning the computer on and off. Some keyboards, including most for laptop computers, also incorporate a trackball, mouse pad, or other cursor-directing device. No standard exists for positioning the function, numeric, and other buttons on a keyboard relative to the QWERTY and other typewriting keys. Thus layouts vary on keyboards.
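The ASCII claim above is easy to check with Python's built-in `ord()` and `chr()` functions: code 65 maps to the letter A, and plain ASCII values fit within seven bits.

```python
# Verifying the ASCII example: code 65 is "A", and ASCII needs only 7 bits.
print(ord("A"))          # 65
print(chr(65))           # A
print(ord("A") < 2**7)   # True: 65 fits in seven data bits
```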
In the 1930s, American educators August Dvorak and William Dealey designed an alternative layout, the Dvorak keyboard, so that the letters that make up most words in the English language are in the middle row of keys and are easily reachable by a typist’s fingers. Common letter combinations are also positioned so that they can be typed quickly. Most keyboards are arranged in rectangles, left to right around the QWERTY layout. Newer, innovative keyboard designs are more ergonomic in shape. These keyboards have separated banks of keys and are less likely to cause carpal tunnel syndrome, a disorder often caused by excessive typing on less ergonomic keyboards.
Most computer monitors use a cathode-ray tube (CRT) as the display device. A CRT is a glass tube that is narrow at one end and opens to a flat screen at the other end. The CRTs used for monitors have rectangular screens, but other types of CRTs may have circular or square screens. The narrow end of the CRT contains a single electron gun for a monochrome, or single-colour, monitor, and three electron guns for a colour monitor—one electron gun for each of the three primary colours: red, green, and blue. The display screen is covered with tiny phosphor dots that emit light when struck by electrons from an electron gun.
Monochrome monitors have only one type of phosphor dot while colour monitors have three types of phosphor dots, each emitting red, green, or blue light. One red, one green, and one blue phosphor dot are grouped together into a single unit called a picture element, or pixel. A pixel is the smallest unit that can be displayed on the screen. Pixels are arranged together in rows and columns and are small enough that they appear connected and continuous to the eye.
Electronic circuitry within the monitor controls an electromagnet that scans and focuses electron beams onto the display screen, illuminating the pixels. Image intensity is controlled by the number of electrons that hit a particular pixel. The more electrons that hit a pixel, the more light the pixel emits. The pixels, illuminated by each pass of the beams, create images on the screen. The variety of colour and shading in an image is produced by carefully controlling the intensity of the electron beams hitting each of the dots that make up the pixels. The speed at which the electron beams repeat a single scan over the pixels is known as the refresh rate. Refresh rates are usually about 60 times a second.
Monochrome monitors display one colour for text and pictures, such as white, green, or amber, against a dark colour, such as black, for the background. Gray-scale monitors are a type of monochrome monitor that can display between 16 and 256 different shades of grey.
Manufacturers describe the quality of a monitor’s display by dot pitch, which is the amount of space between the centres of adjacent pixels. Smaller dot pitches mean the pixels are more closely spaced and the monitor will yield sharper images. Most monitors have dot pitches that range from 0.22 mm (0.008 in) to 0.39 mm (0.015 in).
The screen size of monitors is measured by the distance from one corner of the display to the diagonally opposite corner. A typical size is 38 cm (15 in), with most monitors ranging in size from 22.9 cm (9 in) to 53 cm (21 in). Standard monitors are wider than they are tall and are called landscape monitors. Monitors that have greater height than width are called portrait monitors.
The amount of detail, or resolution, that a monitor can display depends on the size of the screen, the dot pitch, and on the type of display adapter used. The display adapter is a circuit board that receives formatted information from the computer and then draws an image on the monitor, displaying the information to the user. Display adapters follow various standards governing the amount of resolution they can obtain. Most colour monitors are compatible with Video Graphics Array (VGA) standards, which are 640 by 480 pixels (640 pixels on each of 480 rows), or about 300,000 pixels. VGA yields 16 colours, but most modern monitors display far more colours and are considered high resolution in comparison. Super VGA (SVGA) monitors have 1024 by 768 pixels (about 800,000) and are capable of displaying more than 60,000 different colours. Some SVGA monitors can display more than 16 million different colours.
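The pixel counts quoted above follow directly from multiplying the horizontal resolution by the number of rows; a quick arithmetic check:

```python
# Total pixels = columns x rows.
vga = 640 * 480      # "about 300,000" in the text; exactly 307,200
svga = 1024 * 768    # "about 800,000" in the text; exactly 786,432
print(vga, svga)
```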
A monitor is one type of computer display, defined by its CRT screen. Other types of displays include flat, laptop computer screens that often use liquid-crystal displays (LCDs). Other thin, flat-screen monitors that do not employ CRTs are currently being developed.
Printer, a computer peripheral that puts text or a computer-generated image on paper or on another medium, such as a transparency. Printers can be categorized in any of several ways. The most common distinction is impact vs. non-impact. Impact printers physically strike the paper and are exemplified by pin dot-matrix printers and daisy-wheel printers; non-impact printers include every other type of print mechanism, including laser, ink-jet, and thermal printers. Other possible methods of categorizing printers include (but are not limited to) the following:
Print technology: Chief among these, with microcomputers, are pin dot-matrix, ink-jet, laser, thermal, and (although somewhat outdated) daisy-wheel or thimble printers. Pin dot-matrix printers can be further classified by the number of pins in the print head: 9, 18, 24, and so on.
Character formation: Fully formed characters made of continuous lines (for example, those produced by a daisy-wheel printer) vs. dot-matrix characters composed of patterns of dots (such as those produced by standard dot-matrix, ink-jet, and thermal printers). Laser printers, while technically dot-matrix, are generally considered to produce fully formed characters because their output is very clear and the dots are extremely small and closely spaced.
Method of transmission: parallel (byte-by-byte transmission) vs. serial (bit-by-bit transmission). These categories refer to the means by which output is sent to the printer rather than to any mechanical distinctions. Many printers are available in either serial or parallel versions, and still other printers offer both choices, yielding greater flexibility in installation options.
Method of printing: Character by character, line by line, or page by page. Character printers include standard dot-matrix, ink-jet, thermal, and daisy-wheel printers. Line printers include the band, chain, and drum printers that are commonly associated with large computer installations or networks. Page printers include the electro photographic printers, such as laser printers.
Print capability: Text-only vs. text-and-graphics. Text-only printers, including most daisy-wheel and thimble printers and some dot-matrix and laser printers, can reproduce only characters for which they have matching patterns, such as embossed type, or internal character maps. Text-and-graphics printers—dot-matrix, ink-jet, laser, and others—can reproduce all manner of images by “drawing” each as a pattern of dots.
Mouse, a common pointing device, popularized by its inclusion as standard equipment with the Apple Macintosh. With the rise in popularity of graphical user interfaces (GUIs) in MS-DOS, UNIX, and OS/2, use of mice is growing throughout the personal computer and workstation worlds. The basic features of a mouse are a casing with a flat bottom, designed to be gripped by one hand; one or more buttons on the top; a multidirectional detection device (usually a ball) on the bottom; and a cable connecting the mouse to the computer. See the illustration. By moving the mouse on a surface (such as a desk), the user typically controls an on-screen cursor. A mouse is a relative pointing device because there are no defined limits to the mouse’s movement and because its placement on a surface does not map directly to a specific screen location. To select items or choose commands on the screen, the user presses one of the mouse’s buttons, producing a “mouse click.”
Types of mouse – bus mouse, mechanical mouse, optical mouse, optomechanical mouse, serial mouse, and trackball.
A Network Interface Card (NIC) is a hardware device that handles an interface to a computer network and allows a network-capable device to access that network. The NIC has a ROM chip that contains a unique number, the media access control (MAC) address, that is permanent. The MAC address identifies the device uniquely on the LAN. The NIC exists on both the ‘Physical Layer’ (Layer 1) and the ‘Data Link Layer’ (Layer 2) of the OSI model.
Sometimes the words ‘controller’ and ‘card’ are used interchangeably when talking about networking because the most common NIC is the network interface card. Although ‘card’ is more commonly used, it is less encompassing. The ‘controller’ may take the form of a network card that is installed inside a computer, or it may refer to an embedded component as part of a computer motherboard, a router, expansion card, printer interface or a USB device.
A MAC address is a 48-bit network hardware identifier that is permanently set on a ROM chip on the NIC to identify that device on the network. The first 24-bit field is called the Organizationally Unique Identifier (OUI) and is largely manufacturer-specific. Each OUI allows for 16,777,216 unique NIC addresses. Smaller manufacturers that do not need more than 4096 unique NIC addresses may opt to purchase an Individual Address Block (IAB) instead. An IAB consists of the 24-bit OUI plus a 12-bit extension (taken from the ‘potential’ NIC portion of the MAC address).
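The 24-bit OUI / 24-bit device split described above can be illustrated by parsing a MAC address string; a small sketch (the address used is made up for illustration):

```python
def split_mac(mac: str):
    """Split a MAC address into its OUI and device-specific halves."""
    octets = mac.lower().split(":")
    assert len(octets) == 6, "a MAC address has six octets (48 bits)"
    # First three octets (24 bits) are the OUI; the rest identify the card.
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("00:1A:2B:3C:4D:5E")  # hypothetical address
print(oui)      # 00:1a:2b -> assigned to the manufacturer by the IEEE
print(device)   # 3c:4d:5e -> chosen by the manufacturer per card
print(2 ** 24)  # 16,777,216 device addresses available per OUI
```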
Although other network technologies exist, Ethernet has achieved near-ubiquity since the mid-1990s. Every Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored in ROM carried on the card. Every computer on an Ethernet network must have a card with a unique MAC address. Normally it is safe to assume that no two network cards will share the same address, because card vendors purchase blocks of addresses from the Institute of Electrical and Electronics Engineers (IEEE) and assign a unique address to each card at the time of manufacture.
Whereas network cards used to be expansion cards that plug into a computer bus, the low cost and ubiquity of the Ethernet standard means that most newer computers have a network interface built into the motherboard. These either have Ethernet capabilities integrated into the motherboard chipset or implemented via a low cost dedicated Ethernet chip, connected through the PCI (or the newest PCI Express) bus. A separate network card is not required unless multiple interfaces are needed or some other type of network is used. Newer motherboards may even have dual network (Ethernet) interfaces built-in.
The card implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or token ring. This provides a base for a full network protocol stack, allowing communication among small groups of computers on the same LAN and large-scale network communications through routable protocols, such as IP.
The 8P8C (8 Position 8 Contact, also backronymed as 8 position 8 conductor; often incorrectly called RJ45) is a modular connector commonly used to terminate twisted pair and multi-conductor flat cable. These connectors are commonly used for Ethernet over twisted pair, registered jacks and other telephone applications, RS-232 serial using the EIA/TIA-561 and Yost standards, and other applications involving unshielded twisted pair, shielded twisted pair, and multi-conductor flat cable.
An 8P8C modular connector has two paired components: the male plug and the female jack, each with eight equally-spaced conducting channels. On the plug, these conductors are flat contacts positioned parallel with the connector body. Inside the jack, the conductors are suspended diagonally toward the insertion interface. When an 8P8C plug is mated with an 8P8C jack, the conductors meet and create an electrical connection. Spring tension in the jack’s conductors ensures a good interface with the plug and allows for slight travel during insertion and removal.
Although commonly referred to as an RJ45 in the context of Ethernet and category 5 cables, it is technically incorrect to refer to a generic 8P8C connector as an RJ45. The registered jack (RJ) standard specifies a different mechanical interface and wiring scheme for a true RJ45 than TIA/EIA-568-B, which is often used for modular connectors in Ethernet and telephone applications. 8P8C modular plugs and jacks look very similar to the plugs and jacks used for the FCC’s registered jack RJ45 variants, although the true and extremely uncommon RJ45 is not compatible with 8P8C modular connectors. It neither uses all eight conductors (but only two of them for a pair of wires plus two for a programming resistor) nor does it fit into an 8P8C jack, because the true RJ45 plug is “keyed”.
Originally, there was only the true telephone RJ45. It is one of the many registered jacks, like RJ11, a standard from which it gets the “RJ” in its name. As a registered jack, true telephone RJ45 specifies both the physical connector and wiring pattern. The true telephone RJ45 uses a special, keyed 8P2C modular connector, with pins 4 and 5 wired for tip and ring of a single telephone line and pins 7 and 8 connected to a programming resistor. It is meant to be used with a high speed modem, and is obsolete today.
Telephone installers who installed true telephone RJ45 jacks in the past were familiar with the inner workings which made it RJ45, but their clients saw only a hole in the wall of a particular shape, and came to understand RJ45 as the name for a hole of that shape. When they found similar-looking connectors being used in entirely non-telephone applications, usually connecting computers, they called these “RJ45”, too. This was therefore the so-called computer “RJ45”.
Compounding the problem was the fact that the physical connectors indicated by true telephone RJ45 are not even compatible with computer “RJ45” connectors. True telephone RJ45 connectors are a special variant of 8P2C, meaning only the middle 2 positions have conductors in them, while pins 7 and 8 are shorting a programming resistor. Computer “RJ45” is 8P8C – all eight conductors are always present. Furthermore, true telephone RJ45 involves a “keyed” variety of the 8P body, which means it may have an extra tab that a computer “RJ45” connector is unable to mate with.
Because true telephone RJ45 never saw wide usage and computer “RJ45” has become well known today, computer “RJ45” is almost always what a person is referring to when they say “RJ45”. Electronics catalogs not specialized to the telephone industry advertise 8P8C modular connectors as “RJ45”. Virtually all electronic equipment that uses an 8P8C connector (or possibly any 8P connector at all) will document it as an “RJ45” connector.
Rounding out the confusion in “RJ45” naming is the fact that some people intend for the term to encompass not just the connector shape and size, but the wiring standard for it described by TIA/EIA-568-B as well. So one might find “Here is the pinout of an RJ45 jack.”
8P8C connectors are commonly used in computer networking and telephone applications, where the plug on each end is an 8P8C modular plug wired according to a TIA/EIA standard. Most network communications today are carried over Category 5e or Category 6 cable with an 8P8C modular plug crimped on each end. The 8P8C modular connector is also used for RS-232 serial interfaces according to the EIA/TIA-561 standard. This application is commonly used as a console interface on network equipment such as switches and routers. Other applications include other networking services such as ISDN and T1. In flood-wired environments the centre (blue) pair is often used to carry telephony signals. Where so wired, the physical layout of the 8P8C modular jack allows for the insertion of an RJ11 plug in the centre of the jack, provided the RJ11 plug is wired in true compliance with the U.S. telephony standards (RJ11) using the centre pair. The formal approach to connecting telephony equipment is the insertion of a type-approved converter.
The remaining (brown) pair is increasingly used for Power over Ethernet. Legacy equipment may use just this pair; this conflicts with other equipment as manufacturers used to short circuit unused pairs to reduce signal crosstalk. Some routers/bridges/switches can be powered by the unused 4 lines — blues (+) and browns (-) — to carry current to the unit. There is now a standardized scheme for Power over Ethernet.
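The pair positions discussed above (blue pair in the centre, brown pair on the outer pins) come from the TIA/EIA-568-B layout; a small reference table as a Python dictionary:

```python
# TIA/EIA-568-B assignment of wire pairs to 8P8C pin positions.
T568B_PAIRS = {
    "orange": (1, 2),  # transmit pair on 10/100BASE-T
    "green":  (3, 6),  # receive pair on 10/100BASE-T
    "blue":   (4, 5),  # centre pair, often used for telephony
    "brown":  (7, 8),  # outer pair, used by some Power over Ethernet gear
}

# The blue pair sits in the centre of the jack, which is why an RJ11
# plug wired to the centre pair can be inserted into an 8P8C jack.
print(T568B_PAIRS["blue"])   # (4, 5)
print(T568B_PAIRS["brown"])  # (7, 8)
```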
Different manufacturers of 8P8C modular jacks arrange for the pins of the 8P8C modular connector jack to be linked to wire connectors (often IDC type terminals) that are in a different physical arrangement from that of other manufacturers. Thus, for example, if a technician is in the habit of connecting the white/orange wire to the “bottom right hand” IDC terminal, which links it to 8P8C modular connector pin 1, in jacks made by other manufacturers this terminal may instead connect to 8P8C modular connector pin 2 (or any other pin).
A network hub or repeater hub is a device for connecting multiple twisted pair or fiber optic Ethernet devices together and thus making them act as a single network segment. Hubs work at the physical layer (layer 1) of the OSI model. The device is thus a form of multiport repeater. Repeater hubs also participate in collision detection, forwarding a jam signal to all ports if it detects a collision.
Hubs also often come with a BNC and/or AUI connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. The availability of low-priced network switches has largely rendered hubs obsolete but they are still seen in older installations and more specialized applications.
A network hub is a fairly unsophisticated broadcast device. Hubs do not manage any of the traffic that comes through them, and any packet entering any port is broadcast out on all other ports. Since every packet is being sent out through all other ports, packet collisions result—which greatly impedes the smooth flow of traffic.
The need for hosts to be able to detect collisions limits the number of hubs and the total size of the network. For 10 Mbit/s networks, up to 5 segments (4 hubs) are allowed between any two end stations. For 100 Mbit/s networks, the limit is reduced to 3 segments (2 hubs) between any two end stations, and even that is only allowed if the hubs are of the low delay variety. Some hubs have special (and generally manufacturer specific) stack ports allowing them to be combined in a way that allows more hubs than simple chaining through Ethernet cables, but even so, a large Fast Ethernet network is likely to require switches to avoid the chaining limits of hubs.
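The chaining limits in the paragraph above can be captured as a simple lookup; a sketch based only on the figures quoted in the text:

```python
# Maximum hubs allowed between any two end stations, keyed by network
# speed in Mbit/s (figures as quoted in the text above).
MAX_HUBS = {10: 4, 100: 2}

def chain_is_valid(speed_mbit: int, hubs_in_path: int) -> bool:
    """Return True if a path through this many hubs is within the limit."""
    return hubs_in_path <= MAX_HUBS[speed_mbit]

print(chain_is_valid(10, 4))   # True: four hubs are allowed at 10 Mbit/s
print(chain_is_valid(100, 3))  # False: only two hubs at 100 Mbit/s
```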
Most hubs (intelligent hubs) detect typical problems, such as excessive collisions on individual ports, and partition the port, disconnecting it from the shared medium. Thus, hub-based Ethernet is generally more robust than coaxial cable-based Ethernet, where a misbehaving device can disable the entire collision domain. Even if not partitioned automatically, an intelligent hub makes troubleshooting easier because status lights can indicate the possible problem source or, as a last resort, devices can be disconnected from a hub one at a time much more easily than a coaxial cable. They also remove the need to troubleshoot faults on a huge cable with multiple taps.
Hubs classify as Layer 1 devices in the OSI model. At the physical layer, hubs can support little in the way of sophisticated networking. Hubs do not read any of the data passing through them and are not aware of their source or destination. Essentially, a hub simply receives incoming packets, possibly amplifies the electrical signal, and broadcasts these packets out to all devices on the network – including the one that originally sent the packet.
Technically speaking, three different types of hubs exist:
Passive hubs do not amplify the electrical signal of incoming packets before broadcasting them out to the network. Active hubs, on the other hand, do perform this amplification, as does a different type of dedicated network device called a repeater. Less commonly, the term concentrator is used to refer to a passive hub, and the term multiport repeater to an active hub.
Intelligent hubs add extra features to an active hub that are of particular importance to businesses. An intelligent hub typically is stackable (built in such a way that multiple units can be placed one on top of the other to conserve space). It also typically includes remote management capabilities via Simple Network Management Protocol (SNMP) and virtual LAN (VLAN) support.
Historically, the main reason for purchasing hubs rather than switches was their price. This has largely been eliminated by reductions in the price of switches, but hubs can still be useful in special circumstances:
Turn off the computer and unplug it from the wall outlet. Place the computer on a worktable. Open the computer case by unscrewing the two or four screws located on the back of the computer. The exact number of screws will depend on what kind of case the computer has.
Examine the motherboard to find out which types of card slots are open for the installation of the network card. Typically, there are three types of interface slots available on motherboards: ISA, PCI, and AGP.
Check your network card for which type of slot it requires. Write down some of the information on the network card before installing. Write down the MAC address for this card. Unscrew the screw holding the plate over the unused slot on the motherboard and press the edge connector down in to the empty socket on the motherboard.
Press firmly to make sure that the card seats well. Replace the screw to hold the card in place and close the computer case in the reverse order of how you opened it.
Plug the computer back into the wall outlet and restart the computer. Wait for Windows to boot and to find the new hardware. If this occurs, then respond to its inquiries for a driver by placing the driver disk in the drive and clicking “Yes.”
Open the Control Panel and locate the Network Connections icon. Find the network adapter, right click and choose “Properties.” Look for the MAC address of the card and compare it with the number taken from the card before installation. They should be the same.
Check the services running on this network adapter. You should have the Client for Microsoft Networks and TCP/IP protocol checked and installed. If these are not present, install them at this time.
Take the LAN cable and connect one end in the back of the computer. Use the cable socket that is on the back of the network card just installed. Have your IT department configure your workstation for the workgroup your computer is a member of. If you are not joining a workgroup but are accessing the Internet via broadband, follow the instructions from your lecturer.
Then, double-click the Control Panel icon and click Add/Card. If asked, enter the input/output port that you chose before. Reboot Windows, and then change the parameters once again, because it may not have accepted them. Then reboot again. Once Windows has finally restarted, check Control Panel. If the card appears with a yellow exclamation mark, there is a conflict, and you will have to change the IRQ.
Use the correct cables, properly fitted.
You should see a green link light when you connect to the switch.
Select IP addresses for each PC:
Default gateway – 192.168.1.1
PC 1 – 192.168.1.2
PC 2 – 192.168.1.3
Subnet mask – 255.255.255.0
Then try to ping each PC from the others.
Also try to ping the gateway.
This should help you.
If it does not work, check your switch; it might be faulty.
The advice here is to check that all the cables are of the correct type (straight-through cables to connect the PCs to the switches) and to make sure that the switch’s lights turn green after a minute or so. If the lights on the switch are a different colour, consult the manual that came with the switch to determine the fault.
If all lights indicate no problems, make sure that all the IP addresses on the PCs are in the same network and have the same subnet mask:
COM 01 – 192.168.1.1 subnet mask – 255.255.255.0
COM 02 – 192.168.1.2 subnet mask – 255.255.255.0
COM 03 – 192.168.1.3 subnet mask – 255.255.255.0
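Whether the three addresses above really share one network under the mask 255.255.255.0 can be checked with Python's standard ipaddress module; a small sketch:

```python
import ipaddress

def same_subnet(addresses, netmask="255.255.255.0"):
    """Return True if every address falls in the same network."""
    networks = {
        ipaddress.ip_network(f"{addr}/{netmask}", strict=False)
        for addr in addresses
    }
    return len(networks) == 1  # one network means all hosts can ping directly

lan = ["192.168.1.1", "192.168.1.2", "192.168.1.3"]
print(same_subnet(lan))                    # True: all in 192.168.1.0/24
print(same_subnet(lan + ["192.168.2.5"]))  # False: different network
```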
If you are using a Windows operating system, right-click on “My Computer” > Properties > “Computer Name” tab > click the “Change” button, and make sure the workgroup names are exactly the same on each PC.
Make sure you’ve enabled File and Printer Sharing for Microsoft Networks in the network card’s properties.
If other users use a different version of Windows to access your printer, they will need to install the printer driver themselves. You can help by installing additional printer drivers on your Windows XP machine, so that the correct driver is supplied automatically when other users access the shared printer from a different version of Windows. Click Additional Drivers, tick each additional driver you would like to install, and you will be prompted to install them after clicking OK.
To implement the same network I included sufficient bench space, a suitable electrical supply, appropriate lighting and 3 PCs with the identified required hardware equipment.
To implement this network we decided that a “peer to peer” network is the most suitable network type, so we obtained the required network hub/switch, network cables, NICs and suitable ports.
The company needs to install Windows XP as the operating system in its actual deployment, so we obtained licensed Windows XP software. The company also wants to share a printer within the network, so we obtained a laser printer with the appropriate driver disk.
Narnia Limited is opening a new branch with 150 PCs in the next 3 months. Before the real implementation of the network, I simulated it with a small-scale test implementation in our lab.
Based on this small implementation, we can implement the 150-PC network in the company’s real environment within the given time period of three months.
Different computer operating systems have unique rules for the naming of files. Windows 95 (Win95) and disk operating systems (DOS), for instance, make use of an extension attached to the end of each filename in order to indicate the type of file (see Windows). Extensions begin with a period (.), and then have one or more letters. An example of a file extension used in Win95 and DOS is .bak, which indicates that the file is a backup file. When saving a file, a user can give it any name within the rules of the operating system. In addition, the name must be unique. Two files in the same directory may not have the same name, but some operating systems allow the same name for one file to be placed in more than one location. These additional names are called aliases.
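The extension convention described above is easy to inspect programmatically; a minimal sketch using Python's standard library:

```python
import os

# An extension is everything from the last period onward.
name, ext = os.path.splitext("report.bak")
print(name)  # report
print(ext)   # .bak -> marks this as a backup file under DOS conventions

# The same base name with different extensions names different files:
# report.txt and report.bak can coexist in one directory.
print(os.path.splitext("archive.tar.gz"))  # ('archive.tar', '.gz')
```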
Directory files contain information used to organize other files into a hierarchical structure. In the Macintosh operating system, directory files are called folders. The topmost directory in any file system is the root directory. A directory contained within another directory is called a subdirectory. Directories containing one or more subdirectories are called parent directories. Executable files contain programs or commands that can be run by the computer.
Executable files in Win95 and DOS have a .exe suffix at the end of their name and are often called EXE (pronounced EX-ee) files. Text files contain characters represented by their ASCII (American Standard Code for Information Interchange) codes. These files are often called ASCII (pronounced ASK-ee) files. Files that contain words, sentences, and bodies of paragraphs are frequently referred to as text files.
In 2001 Microsoft released a new operating system known as Windows XP, the company’s first operating system for consumers that was not based on MS-DOS. The same year the company also released Xbox, its first venture into video-game consoles. Microsoft announced a new business strategy in 2001 known as .Net (pronounced dot-net). The strategy sought to enable a variety of hardware devices, from PCs to PDAs to cell phones, to communicate with each other via the Internet, while also automating many computer functions. Confusion over the term .Net led to the adoption of the slogan “seamless computing” in 2003.
Other major business developments in the early 21st century included new versions of the Microsoft Network and the development with several major computer manufacturers of the Tablet PC, a laptop computer that featured handwriting-recognition software and a wireless connection to the Internet. In 2003 the company began to focus on “trustworthy computing,” requiring its programmers to improve their skills in protecting software from malicious hacker attacks in the form of computer viruses and worms. In 2004 Microsoft sold its innovative online newsmagazine, Slate, to The Washington Post Company, ending an experiment in online journalism that began in 1996 under Editor Michael Kinsley. In November 2005 Microsoft unveiled its new-generation video game console, the Xbox 360 (see Electronic Games). The new device went beyond gaming, providing consumers with the ability to store and play audio, video, and photo files. The same month Gates and the newly named chief technology officer, Ray Ozzie, announced a new Web services initiative providing software services on the Internet accessible from any browser. The initial components, Windows Live and Office Live, represented a move away from packaged software. The initiatives were to be supported by advertising revenue and subscriptions.
In June 2006 Gates announced that he would begin transitioning from a full-time role at Microsoft to a full-time role at the Bill & Melinda Gates Foundation. Gates planned to have only a part-time role at Microsoft by July 2008, though he would retain the title of chairman and continue to advise the company on key business developments. As part of the transition, he transferred the title of chief software architect to Ozzie. In November 2006 Microsoft released Vista, its first new operating system since Windows XP was introduced in 2001. The long-anticipated system was first made available to businesses only. A consumer version was released in January 2007. The new system won generally favorable reviews for its improved graphics, search capabilities, and security protection against computer viruses.
Make sure these files and programs are present; otherwise, Windows won’t function properly.
This file is used by MS-DOS and other operating systems at startup. It does not have to be present, but if it is, the commands in it are executed when the computer goes through the startup process. It contains instruction lines to run programs automatically at startup, for example Mouse.com or myprograme.exe. It runs during the boot process and can also be run at any other time by typing autoexec at the MS-DOS prompt. For modern operating systems, IO.SYS is used instead.
This is a hidden system file in the root directory of the primary partition. It holds the menu which appears each time the PC is started. On computers with two operating systems, it enables us to choose which operating system we want to use; if the user does not react within 30 seconds, the PC automatically boots the default OS. BOOT.INI can be opened with Notepad, but a better way of controlling options for the startup menu is through System Properties in Start > Control Panel.
This command-line processor interprets commands from the user. From here we can change settings or carry out routine operations such as copy, paste, edit, and many more.
This file is used by MS-DOS, together with AUTOEXEC.BAT, and by other operating systems at startup. It can only be run during the boot process of the PC. It may not be present, but if it is, the commands inside it are used when the computer goes through the startup process. It normally contains the instructions to load device drivers (for example, the CD-ROM drive or sound card driver).
For Windows 9x systems, this file replaced the three main DOS files: MSDOS.SYS, CONFIG.SYS, and AUTOEXEC.BAT. It was in turn replaced by NTLDR in Windows XP and Windows 2000, so you won’t find this file running under Windows XP or Windows 2000.
This program is the controller for software interaction with the MS-DOS kernel. Older systems, programs, or games that need MS-DOS to load will require MSDOS.SYS. Newer programs and systems normally do not need this file to load. It is also one of the files needed to boot MS-DOS.
This file loads the GUI (Graphical User Interface) for Windows 9x by loading three files: KRNL32.DLL, GDI.EXE and USER.EXE. These three files must not be deleted, or Windows will stop functioning and the system will need to be reinstalled.
Y5 Global Internet Roaming Service
Y5 Global Internet Roaming Service is a service provided by Y5ZONE for travelers who need high speed Internet access while they are overseas. Y5ZONE partners with iPass, Inc. (NASDAQ:IPAS) to provide this service. iPass manages a global virtual network allowing mobile users to enjoy high quality Internet connection around the world. Once subscribed to the service, users can connect to the Internet through iPass’ global virtual network, which covers airports, hotels, convention centers, coffee shops etc. This service is provided with 3 connection methods to suit your needs:
Once you have purchased any of the above service plans, you will receive a set of Login ID and password from Y5ZONE. All you need is to install iPassConnect Mobility Manager (software) in your mobile device (e.g. notebook computers, PDAs, smart phones etc). This software has a point-and-click interface, which will allow you to search available networks (Wi-Fi, LAN, dialup access numbers) at your location. Run this iPassConnect Mobility Manager with your Login ID and password to enjoy high speed Wi-Fi or wired broadband.
Asymmetric digital subscriber line (ADSL) is a form of DSL, a data communications technology that enables faster data transmission over copper telephone lines than a conventional voiceband modem can provide. It does this by utilizing frequencies that are not used by a voice telephone call. A splitter, or microfilter, allows a single telephone connection to be used for both ADSL service and voice calls at the same time. ADSL can generally only be distributed over short distances from the central office, typically less than 4 kilometres (2.5 mi), but has been known to exceed 8 kilometres (5 mi) if the originally laid wire gauge allows for farther distribution.
At the telephone exchange the line generally terminates at a DSLAM where another frequency splitter separates the voice band signal for the conventional phone network. Data carried by the ADSL is typically routed over the telephone company’s data network and eventually reaches a conventional internet network.
The distinguishing characteristic of ADSL over other forms of DSL is that the volume of data flow is greater in one direction than the other, i.e. it is asymmetric. Providers usually market ADSL as a service for consumers to connect to the Internet in a relatively passive mode: able to use the higher speed direction for the “download” from the Internet but not needing to run servers that would require high speed in the other direction.
There are both technical and marketing reasons why ADSL is in many places the most common type offered to home users. On the technical side, there is likely to be more crosstalk from other circuits at the DSLAM end (where the wires from many local loops are close to each other) than at the customer premises. Thus the upload signal is weakest at the noisiest part of the local loop, while the download signal is strongest at the noisiest part of the local loop. It therefore makes technical sense to have the DSLAM transmit at a higher bit rate than does the modem on the customer end. Since the typical home user in fact does prefer a higher download speed, the telephone companies chose to make a virtue out of necessity, hence ADSL. On the marketing side, limiting upload speeds limits the attractiveness of this service to business customers, often causing them to purchase higher cost Leased line services instead. In this fashion, it segments the digital communications market between business and home users.
Currently, most ADSL communication is full-duplex. Full-duplex ADSL communication is usually achieved on a wire pair by either frequency-division duplex (FDD), echo-cancelling duplex (ECD), or time-division duplexing (TDD). FDD uses two separate frequency bands, referred to as the upstream and downstream bands. The upstream band is used for communication from the end user to the telephone central office. The downstream band is used for communicating from the central office to the end user.
[Figure: Frequency plan for ADSL. The red area is the frequency range used by normal voice telephony (PSTN); the green (upstream) and blue (downstream) areas are used for ADSL.]
With standard ADSL (annex A), the band from 25.875 kHz to 138 kHz is used for upstream communication, while 138 kHz – 1104 kHz is used for downstream communication. Each of these is further divided into smaller frequency channels of 4.3125 kHz, sometimes termed bins. During initial training, the ADSL modem tests each bin to establish the signal-to-noise ratio at that bin's frequency. The distance from the telephone exchange and the characteristics of the cable mean that some frequencies may not propagate well. At the same time, noise on the copper wire, interference from AM radio stations, and local electrical noise at the customer end mean that relatively high levels of noise are present at some frequencies. Considering both effects, the signal-to-noise ratio in a given bin may be anywhere from good to completely inadequate. Bins with a bad measured signal-to-noise ratio will not be used, resulting in a reduced maximum link capacity but an otherwise functional ADSL connection.
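The band-edge and bin-width figures above determine how many bins each direction gets. A minimal sketch (the edge frequencies are taken from the paragraph above; the helper name is my own):

```python
# Sketch: DMT bin layout for standard ADSL (annex A), using the figures
# quoted above. Bin spacing is 4.3125 kHz; upstream occupies
# 25.875-138 kHz and downstream 138-1104 kHz.

BIN_WIDTH_KHZ = 4.3125

def count_bins(low_khz, high_khz):
    """Number of whole 4.3125 kHz bins between two band edges."""
    return round((high_khz - low_khz) / BIN_WIDTH_KHZ)

upstream_bins = count_bins(25.875, 138.0)    # 26 bins
downstream_bins = count_bins(138.0, 1104.0)  # 224 bins

print(upstream_bins, downstream_bins)
```

The asymmetry is visible immediately: downstream gets almost ten times as many bins as upstream.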
The DSL modem makes a plan for how to exploit each of the bins, sometimes termed the "bits per bin" allocation. Bins with a good signal-to-noise ratio (SNR) are chosen to transmit signals drawn from a larger set of possible encoded values (this range of possibilities equating to more bits of data sent) in each main clock cycle. The number of possibilities must not be so large that the receiver might mishear which one was intended in the presence of noise. Noisy bins may be required to carry as few as two bits (a choice from only four possible patterns), or only one bit per bin in the case of ADSL2+, and really noisy bins are not used at all. If the pattern of noise across the bins changes, the DSL modem can alter the bits-per-bin allocation in a process called "bitswap": bins that have become noisier carry fewer bits, and other bins are given a higher burden. The data transfer capacity the DSL modem reports is the total of the bits-per-bin allocations across all bins. Higher signal-to-noise ratios and more bins in use give a higher total link capacity; lower signal-to-noise ratios or fewer bins in use give a lower one.
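The bits-per-bin idea can be sketched numerically. Several details here are assumptions not stated in the text: a DMT symbol rate of 4000 symbols per second, a per-bin cap of 15 bits, and a simplified floor(log2(1 + SNR/gap)) loading rule with an illustrative 9.8 dB gap; real modems follow standard-specific rules. The 2-bit minimum per used bin follows the ADSL1 behaviour described above.

```python
import math

SYMBOL_RATE = 4000      # assumed DMT symbol rate (symbols per second)
MAX_BITS_PER_BIN = 15   # assumed per-bin ceiling
SNR_GAP_DB = 9.8        # illustrative SNR gap (coding/margin allowance)

def bits_for_bin(snr_db):
    """Simplified bit-loading rule: bits = floor(log2(1 + SNR/gap)),
    clamped to the per-bin limits. Bins too noisy to carry even two
    bits are left unused, matching the ADSL1 behaviour above."""
    snr_lin = 10 ** ((snr_db - SNR_GAP_DB) / 10)
    bits = int(math.log2(1 + snr_lin))
    if bits < 2:
        return 0
    return min(bits, MAX_BITS_PER_BIN)

def link_rate_bps(snr_per_bin_db):
    """Reported capacity: the sum of bits-per-bin across all bins,
    carried once per symbol."""
    return sum(bits_for_bin(s) for s in snr_per_bin_db) * SYMBOL_RATE

# Example: two clean bins, one noisy bin, and one that is unusable.
print(link_rate_bps([40, 35, 20, 6]))  # 84000 bit/s
```

A bitswap is then just recomputing this allocation when the per-bin SNRs change, without retraining the whole link.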
Dial-up Internet access is a form of Internet access that uses telephone lines. The user’s computer or router uses an attached modem connected to a telephone line to dial into an Internet service provider’s (ISP) node to establish a modem-to-modem link, which is then used to route Internet Protocol packets between the user’s equipment and hosts on the Internet.
The term was coined during the early days of computer telecommunications when modems were needed to connect dumb terminals or computers running terminal emulator software to mainframes, minicomputers, online services and bulletin board systems via a telephone line.
Dial-up connections to the Internet require no infrastructure other than the telephone network. As telephone access is widely available, dial-up remains useful to travelers. Dial-up is usually the only choice available for rural or remote areas where broadband installations are not prevalent due to low population and demand. Dial-up access may also be an alternative for users on limited budgets as it is offered for free by some ISPs, though broadband is increasingly available at lower prices in many countries due to market competition.
Dial-up requires time to establish a usable telephone connection (several seconds, depending on the location) and perform handshaking for protocol synchronization before data transfers can take place. In locales with telephone connection charges, each connection incurs an incremental cost. If calls are time-metered, the duration of the connection incurs costs.
Dial-up access is a transient connection, because either the user or the ISP terminates the connection. Internet service providers will often set a limit on connection durations to prevent hogging of access, and will disconnect the user — requiring reconnection and the costs and delays associated with it. Technically-inclined users often find a way to disable the auto-disconnect program such that they can remain connected for days. This is particularly useful for downloading large files such as videos.
A 2008 Pew Internet and American Life Project study states that only 10 percent of American adults still use dial-up Internet access. Reasons for retaining dial-up access span from lack of infrastructure to high broadband prices.
Modern dial-up modems typically have a maximum theoretical transfer speed of 56 kbit/s (using the V.90 or V.92 protocol), although in most cases 40-50 kbit/s is the norm. Factors such as phone line noise as well as the quality of the modem itself play a large part in determining connection speeds. Some connections may be as low as 20 kbit/s in extremely “noisy” environments, such as in a hotel room where the phone line is shared with many extensions.
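To give a sense of scale for these rates, here is a small sketch of the ideal transfer time at a given line speed (assuming decimal megabytes and no protocol overhead; both are simplifications, so real transfers run longer):

```python
def transfer_seconds(size_megabytes, rate_kbit_s):
    """Ideal transfer time: file size in bits divided by the line rate.
    Assumes decimal megabytes (1 MB = 8,000,000 bits) and ignores
    protocol overhead, so this is a lower bound."""
    return size_megabytes * 8_000_000 / (rate_kbit_s * 1000)

# A 1 MB e-mail attachment at the typical 40 kbit/s:
print(round(transfer_seconds(1, 40)))  # 200 seconds, over three minutes
```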
Dial-up connections usually have latency as high as 400 ms or even more, which can make online gaming or video conferencing difficult, if not impossible. First person shooter style games are the most sensitive to latency, making playing them impractical on dial-up. However, some games such as Star Wars: Galaxies, The Sims Online, Warcraft 3, Guild Wars, Unreal Tournament, Halo: Combat Evolved, and Audition are capable of running on 56k dial-up.
An increasing amount of Internet content, such as streaming media, will not work at dial-up speeds.
As telephone-based 56 kbit/s modems began losing popularity, some Internet service providers such as Netzero, TOAST.net, and Earthlink started using pre-compression to increase throughput and maintain their customer base. As an example, Netscape ISP uses a compression program that squeezes images, text, and other objects at the server, just prior to sending them across the phone line. The server-side compression operates much more efficiently than the "on-the-fly" compression of V.44-enabled modems. Typically, website text is compacted to 5%, increasing effective throughput to approximately 1000 kbit/s, and images are lossy-compressed to 15-20%, increasing throughput to about 350 kbit/s.
The drawback of this approach is a loss in quality: graphics acquire more compression artifacts and take on a blurry appearance. However, the speed is dramatically improved, and the user can manually choose to view the uncompressed images at any time. The ISPs employing this approach advertise it as "DSL speeds over regular phone lines" or simply "high speed dialup".
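The throughput arithmetic behind such claims is simple: if the server shrinks content to a fraction of its original size, the perceived rate is the line rate divided by that fraction. A sketch assuming a 50 kbit/s line (the line rate is my own choice, taken from the typical speeds quoted earlier):

```python
def effective_kbit_s(line_rate_kbit_s, compressed_fraction):
    """Perceived throughput with server-side pre-compression: the line
    only carries the compressed bytes, so the user sees
    line_rate / compressed_fraction."""
    return line_rate_kbit_s / compressed_fraction

# Assuming a 50 kbit/s dial-up connection:
print(effective_kbit_s(50, 0.05))  # text at 5%   -> 1000.0 kbit/s
print(effective_kbit_s(50, 0.15))  # images at 15% -> roughly 333 kbit/s
```

The image figure comes out near the "about 350 kbit/s" quoted above; the exact result depends on the actual line rate and compression ratio achieved.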
However, many areas still remain without high-speed Internet despite the eagerness of potential customers. This can be attributed to population, location, or sometimes ISPs' lack of interest, given the small chance of profitability and the high cost of building the required infrastructure. Some dial-up ISPs have responded to the increased competition by lowering their rates to as low as $5 a month, making dial-up an attractive option for those who merely want email access or basic web browsing.
There is currently a lot of concern that Microsoft is monopolizing key areas of the software industry, and seeking to leverage that monopoly to control or dominate many areas of electronic commerce and internet development. One way to deal with this is to avoid using Microsoft products, including the operating system itself. But even this isn’t the only reason to consider alternative operating systems. Pretty much all of the systems listed below have many advantages over a Windows environment. Typical advantages are a more stable and fault tolerant system (far fewer crashes or system freezes), faster execution, easier maintenance, greater power, and a cooler graphical interface.