
# The Art and Science


### Introduction

Electronics is the art and science of getting electrons to move in the way we want in order to do useful work. An electron is a subatomic particle that carries electric charge. Each electron is so small that it can carry only a tiny amount of charge. The charge of electrons is measured in coulombs: one coulomb is the charge carried by 6,250,000,000,000,000,000 electrons. That's 6.25 × 10^18 electrons for you math whizzes. Every good electronics text begins with this definition, yet the term coulomb is rarely used past page 3 of electronics texts and almost never in the actual building of electronic circuits.


Electrons are a component of atoms. There are about 100 different kinds of atoms; each different kind is called an element. Some elements have structures that hold so tightly to their electrons that it is very hard to make the electrons move.

In science, technology, business, and, in fact, most other fields of endeavor, we are constantly dealing with quantities. Quantities are measured, monitored, recorded, manipulated arithmetically, observed, or in some other way utilized in most physical systems. It is important when dealing with various quantities that we be able to represent their values efficiently and accurately. There are basically two ways of representing the numerical value of quantities: analog and digital.

Analogue/analog[4] electronics are those electronic systems with a continuously variable signal; in contrast, in digital electronics signals usually take only two different levels. In analog representation a quantity is represented by a voltage, current, or meter movement that is proportional to the value of that quantity. Analog quantities have an important characteristic: they can vary over a continuous range of values.

In digital representation the quantities are represented not by proportional quantities but by symbols called digits. As an example, consider the digital watch, which provides the time of day in the form of decimal digits which represent hours and minutes (and sometimes seconds). As we know, the time of day changes continuously, but the digital watch reading does not change continuously; rather, it changes in steps of one per minute (or per second). In other words, this digital representation of the time of day changes in discrete steps, as compared with the representation of time provided by an analog watch, where the dial reading changes continuously.
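The watch analogy can be illustrated with a small Python sketch (a hypothetical example, not from the text): the analog value varies continuously, while the digital reading changes only in whole discrete steps.

```python
# Continuous vs. discrete representation of the same quantity (time).
def analog_time(seconds: float) -> float:
    """Continuous representation: any real value is possible."""
    return seconds

def digital_time(seconds: float) -> int:
    """Discrete representation: the reading changes in whole-second steps."""
    return int(seconds)

print(analog_time(61.73))   # 61.73 -> the dial position varies continuously
print(digital_time(61.73))  # 61    -> the display jumps in unit steps
```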

Digital electronics is said to deal with "1s and 0s," but that's a vast oversimplification of the ins and outs of going digital. Digital electronics operates on the premise that all signals have two distinct levels. Depending on what types of devices are in use, the levels may be certain voltages or voltage ranges near the power supply level and ground. The meaning of those signal levels depends on the circuit design, so don't confuse the logical meaning with the physical signal. Here are some common terms used in digital electronics:

• Logical-refers to a signal or device in terms of its meaning, such as “TRUE” or “FALSE”
• Physical-refers to a signal in terms of voltage or current or a device’s physical characteristics
• HIGH-the signal level with the greater voltage
• LOW-the signal level with the lower voltage
• TRUE or 1-the signal level that results from logic conditions being met
• FALSE or 0-the signal level that results from logic conditions not being met
• Active High-a HIGH signal indicates that a logical condition is occurring
• Active Low-a LOW signal indicates that a logical condition is occurring
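As a rough illustration of the logical/physical distinction and of active-high versus active-low signals, here is a Python sketch; the voltage thresholds are assumptions for illustration, not values from the text.

```python
# Assumed thresholds (hypothetical, roughly TTL-like): these are NOT from
# the text, just illustrative numbers.
V_HIGH_MIN = 2.0   # at or above this voltage the signal is HIGH
V_LOW_MAX = 0.8    # at or below this voltage the signal is LOW

def physical_to_logical(voltage: float, active_high: bool = True) -> bool:
    """Interpret a physical voltage as a logical TRUE/FALSE value."""
    if voltage >= V_HIGH_MIN:
        level_is_high = True
    elif voltage <= V_LOW_MAX:
        level_is_high = False
    else:
        raise ValueError("voltage is in the undefined region")
    # Active-high: HIGH means TRUE. Active-low: LOW means TRUE.
    return level_is_high if active_high else not level_is_high

print(physical_to_logical(3.3))                     # True  (active-high, HIGH)
print(physical_to_logical(0.2, active_high=False))  # True  (active-low, LOW)
```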

### Number Systems

Digital logic may work with "1s and 0s," but it combines them into several different groupings that form different number systems. Most of us are familiar with the decimal system, of course: a base-10 system in which each digit represents a power of ten. There are several other number-system representations:

• Binary-base two (each bit represents a power of two), digits are 0 and 1, numbers are denoted with a ‘B’ or ‘b’ at the end, such as 01001101B (77 in the decimal system)
• Hexadecimal or 'hex'-base 16 (each digit represents a power of 16), digits are 0 through 9 plus A-B-C-D-E-F representing 10-15, and each digit requires four binary bits. Numbers are denoted with '0x' at the beginning or 'h' at the end, such as 0x5A or 5Ah (90 in the decimal system). A dollar sign preceding the number ($01BE) is sometimes used as well.
• Binary-coded decimal or BCD-a four-bit number similar to hexadecimal, except that the decimal value of the number is limited to 0-9.
• Decimal-the usual number system. When used in combination with other numbering systems, decimal numbers are denoted with ‘d’ at the end, such as 23d.
• Octal-base eight (each digit represents a power of 8), digits are 0-7, and each requires three bits. Rarely used in modern designs.
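The number systems above can be explored with Python's built-in conversions; the BCD packing helper is a small illustrative addition (not from the text).

```python
# Binary, hex, and octal views of the same values used in the list above.
n = 77
print(bin(n))              # '0b1001101'  (01001101B in the text's notation)
print(hex(90))             # '0x5a'       (0x5A / 5Ah / $5A)
print(oct(n))              # '0o115'      (each octal digit holds three bits)
print(int('01001101', 2))  # 77
print(int('5A', 16))       # 90

# BCD: each decimal digit is packed into its own 4-bit group, limited to 0-9.
def to_bcd(value: int) -> int:
    bcd = 0
    for shift, digit in enumerate(reversed(str(value))):
        bcd |= int(digit) << (4 * shift)
    return bcd

print(f"{to_bcd(23):08b}")  # '00100011' -> 0010 (2) and 0011 (3)
```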

### Digital Construction Techniques

Building digital circuits is somewhat easier than building analog circuits: there are fewer components, and the devices tend to come in similarly sized packages. Connections are less susceptible to noise. The trade-off is that there can be many connections, so it is easy to make a mistake and harder to find one. Due to the uniform packages, there are fewer visual clues.

### Prototyping Boards

Prototyping is nothing but putting together some temporary circuits, as part of the exercises, using a common workbench accessory known as a prototyping board. A typical board is shown in the figure below, with a DIP-packaged IC plugged into the board across the center gap. The board consists of sets of sockets in rows that are connected together so that component leads can be plugged in and connected without soldering. The long rows of sockets on the outside edges of the board are also connected together; these are generally used for power-supply and ground connections that are common to many components.

Try to be very systematic in assembling your wiring layout on the prototype board, laying out the components approximately as shown on the schematic diagram.

IC pins are almost always arranged so that pin 1 is in a corner or by an identifying mark on the IC body and the sequence increases in a counter-clockwise sequence looking down on the IC or “chip” as shown in Figure 1. For most DIP packages, the identifying mark is a semi-circular depression in the middle of one end of the package or a round pit or dot in the corner marking pin 1. Both are shown in the figure, but only one is likely to be used on any given IC. When in doubt, the manufacturer of an IC will have a drawing on the data sheet and those can usually be found by entering “[part number] data sheet” into an Internet search engine.

### Powering Digital Logic

Where analog electronics is usually somewhat flexible in its power requirements and tolerant of variations in power supply voltage, digital logic is not nearly so carefree. Whatever logic family you choose, you will need to regulate the power supply voltage to at least ±5 percent, with adequate filter capacitors to smooth out sharp sags or spikes.

Logic devices depend on stable power supply voltages to provide references to the internal electronics that sense the high or low voltages and act on them as logic signals. If the power supply voltage is not well regulated, or if the device's ground voltage is not kept close to 0 V, the device can become confused and misinterpret its inputs, causing unexpected or temporary changes in signals known as glitches. These can be very hard to troubleshoot, so ensuring that the power supply is clean is well worth the effort. A good technique is to connect a 10-100 µF electrolytic or tantalum capacitor and a 0.1 µF ceramic capacitor in parallel across the power supply connections on your prototyping board.

In computational devices such as computers and digital signal processing elements, a fast parallel binary adder is essential. In many cases it is important to obtain the sum and carry within one clock cycle, and in a pipeline the latency is often expected to be as small as possible. The speed limitation of a parallel binary adder comes from its carry propagation delay; the maximum carry propagation delay is the delay of its overflow carry. In a ripple-carry adder, the evaluation time T for the overflow carry equals the product of the delay T1 of each single-bit carry evaluation stage and the total bit number n, i.e. T = n · T1. In order to improve speed, carry-lookahead strategies are widely used.
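A toy Python delay model (illustrative assumptions only, with the single-stage delay T1 normalized to 1) contrasts the linear delay T = n · T1 of a ripple-carry adder with the roughly logarithmic delay of an idealized binary carry-lookahead tree:

```python
import math

T1 = 1.0  # assumed delay of one single-bit carry evaluation stage

def ripple_delay(n: int) -> float:
    """Ripple carry: the overflow carry traverses all n stages."""
    return n * T1

def lookahead_delay(n: int) -> float:
    """Idealized binary lookahead tree: one level per doubling of width."""
    return math.ceil(math.log2(n)) * T1

for n in (8, 16, 32, 64):
    print(n, ripple_delay(n), lookahead_delay(n))
```

The real trees discussed in the text (carry select, Manchester chains) trade extra logic and loading against exactly this gap between linear and logarithmic growth.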

Since then, such a tree has often been combined with other techniques, for example carry select and the Manchester carry chain, as described in the article "A spanning tree carry lookahead adder" by Thomas Lynch and Earl E. Swartzlander, Jr., IEEE Transactions on Computers, vol. 41, pp. 931-939, August 1992. In their strategy, a 64-bit adder is divided into eight 8-bit adders, seven of which are carry-selected, so the visible levels of the binary carry tree are reduced. The visible levels are further reduced by using sixteen, four, and two 4-bit Manchester carry chain modules in the first, second, and third levels respectively. Finally, seven carries are obtained from the carry tree for the seven 8-bit adders to select their SUMs. In this solution, the true level number is hidden by the 4-bit Manchester module, which is equivalent to two levels in a binary tree. The non-uniformity of the internal loading still exists but is hidden by the high radix; for example, the fan-outs of the four Manchester modules in the second level are 1, 2, 3, and 4 respectively.

An object of the invention is to provide a parallel binary adder architecture which offers superior speed, uniform loading, a regular layout, and a flexible configuration in the trade-off between speed, power, and area compared with existing parallel binary adder architectures. Another object of the invention is to provide an advanced CMOS circuit technique which offers ultrafast speed, particularly for a one-clock-cycle decision. The combination of the two objects yields a very high performance parallel binary adder.

The first object of the invention is achieved with the invented Distributed Binary Lookahead Carry (DBLC) adder architecture, which is an arrangement of the kind set forth in the characterising clause of Claim 1. The second object is achieved by the invented clock-and-data precharged dynamic CMOS circuit technique, which is an arrangement of the kind set forth in the characterising clause of Claim 2. Further features and developments of the invented arrangements are set forth in the other characterising clauses.

A carry look-ahead adder capable of adding or subtracting two input signals includes first stage logic having a plurality of carry-create and carry-transmit logic circuits each coupled to receive one or more bits of each input signal. Each carry-create circuit generates a novel carry-create signal in response to corresponding first bit-pairings of the input signals, and each carry-transmit circuit generates a novel carry-transmit signal in response to corresponding second bit-pairings of the input signals. The carry-create and carry-transmit signals are combined in carry look-ahead logic to generate accumulated carry-create signals, which are then used to select final sum bits.

### Binary Arithmetic Circuits

Binary arithmetic is a combinatorial problem, so it may seem that we can simply use the methods we have already seen for designing combinatorial circuits to obtain circuits for binary arithmetic.

However, there is a problem: the normal way of creating such circuits would often use far too many gates. We must search for different approaches.

Adder circuits can be classified as half adders and full adders.

A half adder can add two bits. It has two inputs, generally labeled A and B, and two outputs, the sum S and carry C. S is the XOR of A and B, and C is the AND of A and B. Essentially, the output of a half adder is the two-bit sum of two one-bit numbers, with C being the more significant of the two output bits.

A full adder is a combinatorial circuit (or actually two combinatorial circuits) of three inputs and two outputs. Its function is to add two binary digits plus a carry from the previous position, and give a two-bit result, the normal output and the carry to the next position. Here, we have used variable names x and y for the inputs, c-in for the carry-in, s for the sum output and c-out for the carry-out.

As you can see, the depth of this circuit is no longer two but considerably larger. In fact, the output and carry from position 7 are determined in part by the inputs of position 0. The signal must traverse all the full adders, with a corresponding delay as a result.
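The half adder, full adder, and ripple-carry chain described above can be sketched in Python as follows (a behavioral model of the gate-level description, not a circuit):

```python
def half_adder(a: int, b: int):
    return a ^ b, a & b            # sum = XOR, carry = AND

def full_adder(x: int, y: int, c_in: int):
    # A full adder is two half adders plus an OR of their carries.
    s1, c1 = half_adder(x, y)
    s, c2 = half_adder(s1, c_in)
    return s, c1 | c2              # carry out if either stage carried

def ripple_add(a: int, b: int, n: int = 8):
    """n-bit ripple-carry adder: the carry propagates from bit 0 upward."""
    carry, result = 0, 0
    for i in range(n):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= s << i
    return result, carry           # sum bits and the final overflow carry

print(ripple_add(77, 90))  # (167, 0)
```

Tracing the loop makes the depth problem visible: bit 7 cannot settle until the carry has rippled through all seven lower positions.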

There are intermediate solutions between the two extremes we have seen so far (i.e. a combinatorial circuit for the entire, say, 32-bit adder, and an iterative combinatorial circuit whose elements are one-bit adders built as ordinary combinatorial circuits). We can, for instance, build an 8-bit adder as an ordinary two-level combinatorial circuit and build a 32-bit adder from four such 8-bit adders. An 8-bit adder can trivially be built from 65,536 (2^16) and-gates and a giant 65,536-input or-gate.

Parallel multipliers are well-known building blocks used in digital signal processors as well as in data processors and graphics accelerators. However, every multiplication can be replaced by shift and add operations. That is why adders are the most important building blocks used in DSPs and microprocessors. The constraints they have to fulfill are area, power, and speed. The adder cell is an elementary unit in multipliers and dividers. The aim of this section is to provide a method for finding the computational power starting from the type of adder. There are many types of adders, but generally they can be divided into four main classes:

• The starting point for any type of adder is the full adder FA. An example of a full adder in CMOS is shown in Fig. 9. The discussion for this adder can be generalized to every type of adder. The outputs SUM and CARRY depend on the inputs a, b, and c as:

In a multiplier we use parallel-series connections of full adders to make a B-bit adder with m inputs. In the following paragraph we make the assumption that every full adder is loaded with another full adder. Another assumption will be that the adder

The reason to choose ripple-carry adders is their power efficiency [15] compared to the other types of adders. Making an n-bit ripple-carry adder from 1-bit adders yields a propagation of the CARRY signal through the adder. Because the CARRY ripples through the stages, the SUM of the last bit is computed only when the CARRY of the previous section has been evaluated. Rippling gives extra power overhead and speed reduction, but still, RCA adders are the best in terms of power consumption.

In [15] a method to find the power dissipated by a B-bit-wide carry-ripple adder has been introduced. Denote by Ea the mean value of the energy needed by the adder when the input a is constant and the other two inputs b and c are changed; this energy has been averaged over the transition diagram with all possible transitions of the variables. Eb and Ec are defined in a similar way. Denote by Eab the mean value of the energy needed by the full adder when the two inputs a and b are kept constant and the input c is changing; by analogy we can define Eac and Ebc. The energy terms Ea, Eb, Ec and Eab, Eac, Ebc depend on the technology and the layout of the full adder. Composing an m-bit adder from a full adder can be done in an iterative way, as shown in Fig. 10. At a given moment the inputs A[k] and B[k] are stable; after this moment every SUM output is computed by taking into account the ripple through CARRY. The probability of getting a transition on the CARRY output after the first full adder FA is 1/2; after the second FA the conditional probability of a transition is 1/4, and so on.

By the same reasoning, the probability of a transition at CARRY[m] is 1/2^(m-1). The inputs A[k] and B[k] are stable, and we have to take into consideration only the energy Eab. For the first full adder, when the inputs A[1] and B[1] are applied, the CARRY output changes; the first adder contributes Ec to the total energy. Bit k contributes the energy E[k] given by:

The total energy dissipated by the m bits carry-ripple adder can be found by summing all contributions of the bits k. Hence, we get the total energy as a function of mean values of the basic energies of a full-adder FA:

For large values of m eq.(2) can be approximated with the first term. This result can be used to compose cascade adders.

To add m words of B bits length we can cascade adders of the type shown in fig.10. The result is illustrated in fig.11. We can assume statistical independence between the SUM and CARRY propagation in the following calculation. The SUM propagates in the direction l and the CARRY propagates in the direction k.

The energy needed for the SUM propagation is Eac and for the CARRY propagation Eab. Supplying the operands at the input b, the energy consumed at the bit (k,l) can be obtained from eq. (1):

The total energy of the cascade adder is a sum of the energies needed by the individual bits and can be found by summing E(k,l) over k and l as shown in eq.(4).

When the number of bits B equals the number of words m, eq. (5) shows that power depends on the square of the number of bits, as explained earlier in the discussion of computational power. This shows how the total energy of the cascade adder can be related to the energy consumption of the basic building element, the full adder FA. It is now possible to compose higher-level, multiplication-like functions.

### Chain versus tree implementations of adders

In ripple-through-carry adders, a node can have multiple unwanted transitions in a single clock cycle before settling to its final value. These glitches increase the power consumption. For power-effective designs they have to be eliminated or, when this is not possible, at least limited in number. One of the most effective ways of minimizing the number of glitches is to balance all signal paths in the circuit and reduce the logic depth. Fig. 12 shows the tree and chain implementations of an adder. The chain circuit shown in Fig. 12(a) behaves as follows: while adder 1 computes a1+a2, adder 2 computes (a1+a2)+a3 with the old value of a1+a2. After the new value of a1+a2 has propagated through adder 1, adder 2 recomputes the correct sum (a1+a2)+a3. Hence, a glitch originates at the output of adder 2 whenever the value of a1+a2 changes. At the output of adder 3 an even worse situation may occur.

Generalizing to an N-stage adder, it is easy to show that in the worst case the output will have N extra transitions, and the total number of extra transitions over all N stages grows to N(N+1)/2. In reality, the transition activity due to glitching will be less, since the worst-case input pattern occurs infrequently. In the tree adder shown in Fig. 12(b), the paths are balanced; therefore the number of glitches is reduced.
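The worst-case glitch count for the chain can be computed directly from the N(N+1)/2 expression above:

```python
def chain_glitches(n: int) -> int:
    """Worst-case extra transitions over all stages of an n-stage chain.

    Stage k can re-evaluate up to k times, so the total is 1 + 2 + ... + n.
    A balanced tree avoids this accumulation because its paths are equal.
    """
    return n * (n + 1) // 2

for n in (4, 8, 16):
    print(n, chain_glitches(n))   # 4 -> 10, 8 -> 36, 16 -> 136
```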

In conclusion, increasing the logic depth increases the number of spurious transitions due to glitching. Decreasing the logic depth reduces the number of glitches, making it possible to speed up the circuit and enabling some voltage down-scaling while the throughput is fixed. On the other hand, decreasing the logic depth increases the number of registers required by the design, adding some extra power consumption due to the registers. The choice of increasing or reducing the logic depth of an architecture is based on a trade-off between the minimization of glitching power and the increase of power due to registers.

The block diagram of the 3-bit ultrafast adder is shown in Fig. 13; it is composed of three multiplexer circuits and six full-adder circuits.

The full-adder circuitry is divided into two regions, a top layer and a bottom layer. The inputs A, B, and Cin are given to the corresponding full adders, and the sum outputs of the top and bottom layers are given as inputs to the corresponding multiplexer.

The selection line of each multiplexer is driven by the signal SEL, and the multiplexer output reflects either the upper or the lower full-adder circuit, for both sum and carry.

### Binary subtraction

Binary subtraction can be done by noticing that, in order to compute the expression x - y, we can instead compute x + (-y). We know from the section on binary arithmetic how to negate a number: invert all the bits and add 1. Thus, we can compute the expression as x + inv(y) + 1. It suffices to invert all the inputs of the second operand before they reach the adder, but how do we add the 1? That seems to require another adder just for that. Luckily, we have an unused carry-in signal at position 0 that we can use: putting a 1 on this input in effect adds one to the result. The complete circuit with addition and subtraction looks like this:
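A minimal Python sketch of this scheme, assuming an 8-bit word width:

```python
N = 8                  # assumed word width for the sketch
MASK = (1 << N) - 1

def subtract(x: int, y: int) -> int:
    """Compute x - y as x + inv(y) + 1, truncated to N bits."""
    inv_y = ~y & MASK             # invert every bit of the second operand
    return (x + inv_y + 1) & MASK # the +1 is supplied by carry-in = 1

print(subtract(90, 77))   # 13
print(subtract(5, 9))     # 252, i.e. -4 in 8-bit two's complement
```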

### Binary multiplication and division

Binary multiplication is even harder than binary addition. There is no good iterative combinatorial circuit available, so we have to use even heavier artillery: a sequential circuit that computes one addition for every clock pulse.
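One common sequential approach is the shift-and-add multiplier; in the sketch below, each loop iteration stands in for one clock pulse (an illustrative model, not a specific circuit from the text):

```python
def shift_add_multiply(a: int, b: int, n: int = 8) -> int:
    """Multiply a by the n-bit value b, one addition per 'clock pulse'."""
    product = 0
    for _ in range(n):
        if b & 1:                 # current multiplier bit set?
            product += a          # add the shifted multiplicand
        a <<= 1                   # shift the multiplicand left
        b >>= 1                   # move on to the next multiplier bit
    return product

print(shift_add_multiply(13, 11))  # 143
```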

### Binary Comparator

The purpose of a two-bit binary comparator is quite simple: it determines whether one 2-bit input number is larger than, equal to, or less than the other. It has a comparison unit that receives a first bit and a second bit and compares them, and an enable unit that outputs the comparison result as the output of the 2-bit comparator according to an enable signal.
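A behavioral Python sketch of such a comparator; the 'GT'/'EQ'/'LT' output encoding is an assumption for illustration, not the text's signal encoding.

```python
def compare_2bit(a: int, b: int, enable: bool = True):
    """Return 'GT', 'EQ', or 'LT' when enabled, else None (output gated off)."""
    if not enable:
        return None               # the enable unit gates the comparison result
    if a > b:
        return 'GT'
    if a < b:
        return 'LT'
    return 'EQ'

print(compare_2bit(0b10, 0b01))         # 'GT'
print(compare_2bit(0b11, 0b11))         # 'EQ'
print(compare_2bit(0b01, 0b10, False))  # None
```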

### NEED FOR TESTING

As the density of VLSI products increases, their testing becomes more difficult and costly. Generating test patterns has shifted from a deterministic approach, in which a testing pattern is generated automatically based on a fault model and an algorithm, to a random selection of test signals. While in real estate the refrain is "Location! Location! Location!", the comparable advice in IC design should be "Testing! Testing! Testing!". Whether deterministic or random generation of test patterns is used, the patterns applied to a VLSI chip can no longer cover all possible defects. Consider the manufacturing process for VLSI chips shown in Fig. 1. Two kinds of cost can be incurred by the test process: the cost of testing and the cost of accepting an imperfect chip. The first is a function of the time spent on testing or, equivalently, the number of test patterns applied to the chip, and it adds to the cost of the chips themselves. The second represents the fact that, when a defective chip is passed as good, its failure may become very costly once it is embedded in its application. An optimal testing strategy should trade off both costs and determine an adequate test length (in terms of testing period or number of test patterns).

Apart from cost, two factors need to be considered when determining test lengths. The first is the production yield, which is the probability that a product is functionally correct at the end of the manufacturing process. If the yield is high, we may not need to test extensively, since most chips tested will be "good," and vice versa. The other factor is the coverage function of the test process, defined as the probability of detecting a defective chip given that it has been tested for a particular duration or a given number of test patterns. If we assume that all possible defects can be detected by the test process, the coverage function can be regarded as a probability distribution function of the detection time given that the chip under test is bad. Thus, by investigating the density function or probability mass function, we can calculate the marginal gain in detection if the test continues. In general, the coverage function of a test process can be obtained through theoretical analysis or experiments on simulated fault models. With a given production yield, the fault coverage requirement can then be determined to attain a specified defect level, which is defined as the probability of having a "bad" chip among all chips passed by a test process. While most problems in VLSI design have been reduced to algorithms in readily available software, the responsibility for various levels of testing and testing methodology can be a significant burden on the designer.

The yield of a particular IC is the number of good die divided by the total number of die per wafer. Due to the complexity of the manufacturing process, not all die on a wafer operate correctly. Small imperfections in the starting material, in processing steps, or in photomasking may result in bridged connections or missing features. It is the aim of a test procedure to determine which die are good and should be used in end systems.

Testing a die can occur:

• At the wafer level
• At the packaged level
• At the board level
• At the system level
• In the field

Obviously, if faults can be detected at the wafer level, the cost of manufacturing is kept lowest. In some circumstances, the cost of developing adequate tests at the wafer level, mixed-signal requirements, or speed considerations may require that further testing be done at the packaged-chip level or the board level. A component vendor can test only at the wafer or chip level. Special systems, such as satellite-borne electronics, might be tested exhaustively at the system level.

Tests fall into two main categories. The first set of tests verifies that the chip performs its intended function: that it performs a digital filtering function, acts as a microprocessor, or communicates using a particular protocol. In other words, these tests assert that all the gates in the chip, acting in concert, achieve a desired function. They are usually used early in the design cycle to verify the functionality of the circuit and will be called functionality tests. The second set of tests verifies that every gate and register in the chip functions correctly. These tests are used after the chip is manufactured to verify that the silicon is intact; they will be called manufacturing tests. In many cases the two sets of tests may be one and the same, although the natural flow of design usually has a designer considering function before manufacturing concerns.

### MANUFACTURING TEST PRINCIPLES

A critical factor in all LSI and VLSI design is the need to incorporate methods of testing circuits. This task should proceed concurrently with any architectural considerations and not be left until fabricated parts are available.

Figure 5.1(a) shows a combinational circuit with n inputs. To test this circuit exhaustively, a sequence of 2^n inputs must be applied and observed to fully exercise it. This combinational circuit is converted to a sequential circuit with the addition of m storage registers, as shown in Figure 5.1(b); the state of the circuit is then determined by the inputs and the previous state. A minimum of 2^(n+m) test vectors must be applied to exhaustively test the circuit. Clearly, this is an important area of design that has to be well understood.
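The exhaustive test-vector counts above can be sketched as:

```python
def exhaustive_vectors(n_inputs: int, m_registers: int = 0) -> int:
    """2^n vectors for a combinational circuit; 2^(n+m) once m state bits
    (storage registers) are added."""
    return 2 ** (n_inputs + m_registers)

print(exhaustive_vectors(10))      # 1024 vectors for 10 inputs
print(exhaustive_vectors(10, 8))   # 262144 once 8 state bits are added
```

The exponential growth in m is exactly why exhaustive testing becomes impractical for sequential circuits and why structured test strategies are needed.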

### OPTIMAL TESTING

With the increased complexity of VLSI circuits, testing has become more costly and time-consuming. The design of a testing strategy, which is specified by the testing period based on the coverage function of the testing algorithm, involves trading off the cost of testing and the penalty of passing a bad chip as good. The optimal testing period is first derived, assuming the production yield is known. Since the yield may not be known a priori, an optimal sequential testing strategy which estimates the yield based on ongoing testing results, which in turn determines the optimal testing period, is developed next. Finally, the optimal sequential testing strategy for batches in which N chips are tested simultaneously is presented. The results are of use whether the yield stays constant or varies from one manufacturing run to another.

### VLSI DESIGN

VLSI design can be classified, based on the prototype, as:

• ASIC
• FPGA

### ASIC: Application-Specific Integrated Circuits

An application-specific integrated circuit (ASIC) is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use. For example, a chip designed solely to run a cell phone is an ASIC. Intermediate between ASICs and industry standard integrated circuits, like the 7400 or the 4000 series, are application specific standard products (ASSPs).

As feature sizes have shrunk and design tools have improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 gates to over 100 million. Modern ASICs often include entire 32-bit processors and memory blocks, including ROM, RAM, EEPROM, and Flash, among other large building blocks. Such an ASIC is often termed an SoC (system-on-a-chip). Designers of digital ASICs use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.

VHDL offers:

• Power and flexibility
• Device-independent design
• Portability
• Benchmarking capabilities
• ASIC migration
• Quick time-to-market
• Low cost

Power and Flexibility: VHDL has powerful language constructs with which we can write descriptions of complex control logic very easily.
