The Art and Science


Introduction

Electronics is the art and science of getting electrons to move in the way we want so as to do useful work. An electron is a sub-atomic particle that carries electric charge. Each electron is so small that it carries only a tiny amount of charge. The charge of electrons is measured in coulombs: one coulomb is the charge carried by 6,250,000,000,000,000,000 electrons, that is, 6.25 × 10^18 electrons for the math whizzes. Every good electronics text begins with this definition, yet the term coulomb is rarely used past page 3 of electronics texts and almost never in the actual building of electronic circuits.

Electrons are a component of atoms. There are about 100 different kinds of atoms. Each different kind is called an element. Some elements have structures that hold tight to their electrons so that it is very hard to make the electrons move.

In science, technology, business, and, in fact, most other fields of endeavor, we are constantly dealing with quantities. Quantities are measured, monitored, recorded, manipulated arithmetically, observed, or in some other way utilized in most physical systems. It is important when dealing with various quantities that we be able to represent their values efficiently and accurately. There are basically two ways of representing the numerical value of quantities: analog and digital.

Analog (or analogue) electronics comprises those electronic systems that use a continuously variable signal. In contrast, in digital electronics signals usually take only two distinct levels. In analog representation a quantity is represented by a voltage, current, or meter movement that is proportional to the value of that quantity. Analog quantities such as those cited above have an important characteristic: they can vary over a continuous range of values.

In digital representation the quantities are represented not by proportional quantities but by symbols called digits. As an example, consider the digital watch, which provides the time of day in the form of decimal digits which represent hours and minutes (and sometimes seconds). As we know, the time of day changes continuously, but the digital watch reading does not change continuously; rather, it changes in steps of one per minute (or per second). In other words, this digital representation of the time of day changes in discrete steps, as compared with the representation of time provided by an analog watch, where the dial reading changes continuously.

Digital electronics deals with "1s and 0s", but that is a vast oversimplification of the ins and outs of going digital. Digital electronics operates on the premise that all signals have two distinct levels. Depending on what types of devices are in use, the levels may be certain voltages or voltage ranges near the power supply level and ground. The meaning of those signal levels depends on the circuit design, so don't confuse the logical meaning with the physical signal. Here are some common terms used in digital electronics:

  • Logical-refers to a signal or device in terms of its meaning, such as "TRUE" or "FALSE"
  • Physical-refers to a signal in terms of voltage or current or a device's physical characteristics
  • HIGH-the signal level with the greater voltage
  • LOW-the signal level with the lower voltage
  • TRUE or 1-the signal level that results from logic conditions being met
  • FALSE or 0-the signal level that results from logic conditions not being met
  • Active High-a HIGH signal indicates that a logical condition is occurring
  • Active Low-a LOW signal indicates that a logical condition is occurring

Number Systems

Digital logic may work with "1s and 0s", but it combines them into several different groupings that form different number systems. Most of us are familiar with the decimal system, of course. That's a base-10 system in which each digit represents a power of ten. The other common number system representations are listed below (and illustrated in the VHDL sketch after this list):

  • Binary-base two (each bit represents a power of two), digits are 0 and 1, numbers are denoted with a 'B' or 'b' at the end, such as 01001101B (77 in the decimal system)
  • Hexadecimal or 'Hex'-base 16 (each digit represents a power of 16), digits are 0 through 9 plus A-B-C-D-E-F representing 10-15, numbers are denoted with '0x' at the beginning or 'h' at the end, such as 0x5A or 5Ah (90 in the decimal system) and require four binary bits each. A dollar sign preceding the number ($01BE) is sometimes used, as well.
  • Binary-coded decimal or BCD-a four-bit number similar to hexadecimal, except that the decimal value of the number is limited to 0-9.
  • Decimal-the usual number system. When used in combination with other numbering systems, decimal numbers are denoted with 'd' at the end, such as 23d.
  • Octal-base eight (each digit represents a power of 8), digits are 0-7, and each requires three bits. Rarely used in modern designs.
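The VHDL hardware description language, used later in this document, has literal notations for most of these bases. The sketch below shows example values written as VHDL bit-string literals; the entity and signal names are purely illustrative:

  library ieee;
  use ieee.std_logic_1164.all;

  entity base_demo is
  end base_demo;

  architecture demo of base_demo is
    -- the same notations as above, in binary, hex and octal
    signal n_bin : std_logic_vector(7 downto 0) := B"0100_1101"; -- 77 decimal
    signal n_hex : std_logic_vector(7 downto 0) := X"5A";        -- 90 decimal
    signal n_oct : std_logic_vector(8 downto 0) := O"115";       -- 77 decimal, three bits per octal digit
  begin
  end demo;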

Digital Construction Techniques

Building digital circuits is somewhat easier than building analog circuits: there are fewer component types, and the devices tend to come in similarly sized packages. Connections are less susceptible to noise. The trade-off is that there can be many connections, so it is easy to make a mistake and harder to find one. And due to the uniform packages, there are fewer visual clues.

Prototyping Boards

Prototyping means putting together temporary circuits, in our case using a common workbench accessory known as a prototyping board. A typical board is shown in the figure below, with a DIP-packaged IC plugged into the board across the center gap. The board consists of sets of sockets in rows that are connected together so that component leads can be plugged in and connected without soldering. The long rows of sockets on the outside edges of the board are also connected together, and these are generally used for the power supply and ground connections that are common to many components.

Try to be very systematic in assembling your wiring layout on the prototype board, laying out the components approximately as shown on the schematic diagram.

Reading Pin Connections

IC pins are almost always arranged so that pin 1 is in a corner or next to an identifying mark on the IC body, with the numbering increasing counter-clockwise as you look down on the IC or "chip", as shown in Figure 1. For most DIP packages, the identifying mark is a semi-circular depression in the middle of one end of the package, or a round pit or dot in the corner marking pin 1. Both are shown in the figure, but only one is likely to be used on any given IC. When in doubt, the manufacturer of an IC will have a drawing on the data sheet, which can usually be found by entering "[part number] data sheet" into an Internet search engine.

Powering Digital Logic

Where analog electronics is usually somewhat flexible in its power requirements and tolerant of variations in power supply voltage, digital logic is not nearly so carefree. Whatever logic family you choose, you will need to regulate the power supply voltages to at least ±5 percent, with adequate filter capacitors to remove sharp sags or spikes.

Logic devices depend on stable power supply voltages to provide references to the internal electronics that sense the high or low voltages and act on them as logic signals. If the power supply voltage is not well regulated, or if the device's ground voltage is not kept close to 0 V, the device can become confused and misinterpret its inputs, causing unexpected or temporary changes in signals known as glitches. These can be very hard to troubleshoot, so ensuring that the power supply is clean is well worth the effort. A good technique is to connect a 10-100 µF electrolytic or tantalum capacitor and a 0.1 µF ceramic capacitor in parallel across the power supply connections on your prototyping board.

In computational devices such as computers and digital signal processing elements, a fast parallel binary adder is essential. In many cases it is important to obtain the sum and carry within one clock cycle, and in a pipeline the latency is often expected to be as small as possible. The speed limitation of a parallel binary adder comes from its carry propagation delay; the maximum carry propagation delay is the delay of its overflow carry. In a ripple carry adder, the evaluation time T for the overflow carry is the product of the delay T1 of each single-bit carry evaluation stage and the total bit number n. To improve speed, carry look-ahead strategies are widely used.
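In symbols, with $n$ the number of bits and $T_1$ the delay of a single-bit carry stage:

\[ T = n \cdot T_1 \]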

Such a carry tree is often combined with other techniques, for example carry select and the Manchester carry chain, as described in the article "A spanning tree carry lookahead adder" by Thomas Lynch and Earl E. Swartzlander, Jr., IEEE Transactions on Computers, vol. 41, pp. 931-939, August 1992. In their strategy, a 64-bit adder is divided into eight 8-bit adders, seven of which are carry-selected, so the visible levels of the binary carry tree are reduced. The visible levels are further reduced by using sixteen, four and two 4-bit Manchester carry chain modules in the first, second and third levels respectively. Finally, seven carries are obtained from the carry tree for the seven 8-bit adders to select their SUMs. In this solution, the true level number is hidden by the 4-bit Manchester module, which is equivalent to two levels in a binary tree. The non-uniformity of the internal loading still exists but is hidden by the high radix; for example, the fan-outs of the four Manchester modules in the second level are 1, 2, 3 and 4 respectively.

An object of the invention is to provide a parallel binary adder architecture which offers superior speed, uniform loading, a regular layout and a flexible configuration in the trade-off between speed, power and area compared with existing parallel binary adder architectures. Another object of the invention is to provide an advanced CMOS circuit technique which offers ultrafast speed, particularly for a one-clock-cycle decision. The combination of the two objects yields a very high performance parallel binary adder.

The first object of the invention is achieved with the invented Distributed Binary Look-ahead Carry (DBLC) adder architecture, which is an arrangement of the kind set forth in the characterising clause of Claim 1. The second object is achieved by the invented clock-and-data precharged dynamic CMOS circuit technique, which is an arrangement of the kind set forth in the characterising clause of Claim 2. Further features and further developments of the invented arrangements are set forth in the remaining characterising clauses.

A carry look-ahead adder capable of adding or subtracting two input signals includes first stage logic having a plurality of carry-create and carry-transmit logic circuits each coupled to receive one or more bits of each input signal. Each carry-create circuit generates a novel carry-create signal in response to corresponding first bit-pairings of the input signals, and each carry-transmit circuit generates a novel carry-transmit signal in response to corresponding second bit-pairings of the input signals. The carry-create and carry-transmit signals are combined in carry look-ahead logic to generate accumulated carry-create signals, which are then used to select final sum bits.
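In textbook terms, the carry-create and carry-transmit signals correspond to the familiar generate (g = a AND b) and propagate (p = a XOR b) signals of a carry look-ahead adder. The VHDL sketch below shows that generic 4-bit structure; it illustrates the general technique only, not the patented circuit, and all names are ours:

  library ieee;
  use ieee.std_logic_1164.all;

  entity cla4 is
    port (a, b : in  std_logic_vector(3 downto 0);
          cin  : in  std_logic;
          sum  : out std_logic_vector(3 downto 0);
          cout : out std_logic);
  end cla4;

  architecture rtl of cla4 is
    signal g, p : std_logic_vector(3 downto 0);  -- generate (create), propagate (transmit)
    signal c    : std_logic_vector(4 downto 0);
  begin
    g <= a and b;
    p <= a xor b;

    -- every carry is computed directly from g, p and cin: no rippling
    c(0) <= cin;
    c(1) <= g(0) or (p(0) and c(0));
    c(2) <= g(1) or (p(1) and g(0)) or (p(1) and p(0) and c(0));
    c(3) <= g(2) or (p(2) and g(1)) or (p(2) and p(1) and g(0))
                 or (p(2) and p(1) and p(0) and c(0));
    c(4) <= g(3) or (p(3) and g(2)) or (p(3) and p(2) and g(1))
                 or (p(3) and p(2) and p(1) and g(0))
                 or (p(3) and p(2) and p(1) and p(0) and c(0));

    sum  <= p xor c(3 downto 0);
    cout <= c(4);
  end rtl;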

Binary Arithmetic Circuits

Binary arithmetic is a combinatorial problem. It may seem trivial to use the methods we have already seen for designing combinatorial circuits to obtain circuits for binary arithmetic.

However, there is a problem. It turns out that the normal way of creating such circuits would often use up way too many gates. We must search for different ways.

Adder Circuits

In electronics, an adder (also called a summer) performs one of the most common arithmetic operations in digital systems: it combines two operands using the rules of addition. In modern computers adders reside in the arithmetic logic unit (ALU), where other operations are also performed. Although adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or one's complement is being used to represent negative numbers, it is trivial to modify an adder into an adder-subtracter. Other signed number representations require a more complex adder.

Types of adders

Adder circuits can be classified as:

  • A Half Adder
  • A Full Adder

A half adder can add two bits. It has two inputs, generally labeled A and B, and two outputs, the sum S and carry C. S is the XOR of A and B, and C is the AND of A and B. Essentially the output of a half adder is the two-bit sum of two one-bit numbers, with C being the more significant of the two output bits.
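A minimal VHDL sketch of the half adder (the entity and port names are ours):

  library ieee;
  use ieee.std_logic_1164.all;

  entity half_adder is
    port (a, b : in  std_logic;
          s, c : out std_logic);
  end half_adder;

  architecture rtl of half_adder is
  begin
    s <= a xor b;  -- sum bit
    c <= a and b;  -- carry bit, the more significant output
  end rtl;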

A full adder is a combinatorial circuit (or actually two combinatorial circuits) with three inputs and two outputs. Its function is to add two binary digits plus a carry from the previous position, and to give a two-bit result: the normal output and the carry to the next position. Here we use the variable names x and y for the inputs, c-in for the carry-in, s for the sum output and c-out for the carry-out.
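The corresponding VHDL sketch, keeping the text's signal names (hyphens become underscores, since VHDL identifiers cannot contain '-'):

  library ieee;
  use ieee.std_logic_1164.all;

  entity full_adder is
    port (x, y, c_in : in  std_logic;
          s, c_out   : out std_logic);
  end full_adder;

  architecture rtl of full_adder is
  begin
    s     <= x xor y xor c_in;                           -- sum of the three input bits
    c_out <= (x and y) or (x and c_in) or (y and c_in);  -- carry to the next position
  end rtl;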

When such full adders are chained into a multi-bit adder, the depth of the circuit is no longer two, but considerably larger. In fact, the output and carry from position 7 are determined in part by the inputs of position 0. The signal must traverse all the full adders, with a corresponding delay as a result.

There are intermediate solutions between the two extremes we have seen so far (i.e. a combinatorial circuit for the entire, say, 32-bit adder, and an iterative combinatorial circuit whose elements are one-bit adders built as ordinary combinatorial circuits). We can, for instance, build an 8-bit adder as an ordinary two-level combinatorial circuit and build a 32-bit adder from four such 8-bit adders. An 8-bit adder can trivially be built from 2^16 = 65,536 and-gates and a giant 65,536-input or-gate.

Adders and computational power

Parallel multipliers are well-known building blocks used in digital signal processors as well as in data processors and graphics accelerators. However, every multiplication can be replaced by shift and add operations, which is why adders are the most important building blocks in DSPs and microprocessors. The constraints they have to fulfill are area, power and speed. The adder cell is an elementary unit in multipliers and dividers. The aim of this section is to provide a method for finding the computational power starting from the type of adder. There are many types of adders, but they can generally be divided into four main classes:

  • Ripple carry adders (RCA);
  • Carry select adders (CS);
  • Carry look-ahead adders (CLA);
  • Conditional sum adders (CSA).

The starting point for any type of adder is a full-adder (FA). An example of a full adder in CMOS is shown in fig. 9. The discussion for this adder can be generalized to every type of adder. The outputs SUM and CARRY depend on the inputs a, b and c as follows.
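In Boolean form these are the standard full-adder relations:

\[ \mathrm{SUM} = a \oplus b \oplus c, \qquad \mathrm{CARRY} = ab + bc + ca \]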

In a multiplier we use parallel-series connections of full-adders to make a B-bit adder with m inputs. In the following paragraphs we make the assumption that every full-adder is loaded with another full-adder.

Ripple carry adders (RCA)

The reason to choose ripple carry adders is their power efficiency [15] compared to the other types of adders. Making an n-bit ripple carry adder from 1-bit adders yields a propagation of the CARRY signal through the adder. Because the CARRY ripples through the stages, the SUM of the last bit is evaluated only after the CARRY of the previous stage has been evaluated. Rippling adds power overhead and reduces speed, but the RCA adders are still the best in terms of power consumption.
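A VHDL sketch of such a ripple carry adder, chaining the full_adder entity shown earlier (the generic and port names are ours); the chained carry is exactly what makes the worst-case delay grow linearly with the word length:

  library ieee;
  use ieee.std_logic_1164.all;

  entity rca is
    generic (n : positive := 8);
    port (a, b : in  std_logic_vector(n-1 downto 0);
          cin  : in  std_logic;
          sum  : out std_logic_vector(n-1 downto 0);
          cout : out std_logic);
  end rca;

  architecture structural of rca is
    signal c : std_logic_vector(n downto 0);  -- internal carry chain
  begin
    c(0) <= cin;
    chain : for k in 0 to n-1 generate
      -- instantiates the full_adder entity from the earlier sketch
      fa : entity work.full_adder
        port map (x => a(k), y => b(k), c_in => c(k),
                  s => sum(k), c_out => c(k+1));
    end generate;
    cout <= c(n);
  end structural;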

In [15] a method for finding the power dissipated by a B-bit-wide carry-ripple adder was introduced. Denote by Ea the mean value of the energy needed by the adder when input a is constant and the other two inputs b and c change. This energy has been averaged over the transition diagram with all possible transitions of the variables. Eb and Ec are defined in a similar way. Denote by Eab the mean value of the energy needed by the full-adder when the two inputs a and b are kept constant and input c changes; by analogy we can define Eac and Ebc. The energy terms Ea, Eb, Ec and Eab, Eac, Ebc depend on the technology and the layout of the full-adder. Composing an m-bit adder from full-adders can be done in an iterative way, as shown in fig. 10. At a given moment, the inputs A[k] and B[k] are stable. After this moment every SUM output is computed by taking into account the ripple-through CARRY. The probability of a transition on the CARRY output after the first full adder FA is 1/2. After the second FA the conditional probability of a transition is 1/4, and so on.

Fig 10. RCA m-bit adder

By the same reasoning, the probability of a transition at CARRY[m] is 1/2^(m-1). The inputs A[k] and B[k] are stable, and we have to take into consideration only the energy Eab. For the first full-adder, when the inputs A[1] and B[1] are applied, the CARRY output changes; the first adder therefore contributes Ec to the total energy. Bit k contributes the energy E[k] given by eq. (1).

The total energy dissipated by the m-bit carry-ripple adder can be found by summing the contributions of all bits k. Hence we get the total energy as a function of the mean values of the basic energies of a full-adder FA, as given in eq. (2).

For large values of m, eq. (2) can be approximated by its first term. This result can be used to compose cascade adders.

Cascade adders

To add m words of B bits length we can cascade adders of the type shown in fig. 10. The result is illustrated in fig. 11. In the following calculation we assume statistical independence between the SUM and CARRY propagation. The SUM propagates in direction l and the CARRY propagates in direction k.

The energy needed for the SUM propagation is Eac, and for the CARRY propagation Eab. Supplying the operands at input b, the energy consumed at bit (k,l) can be obtained from eq. (1); the result is eq. (3).

The total energy of the cascade adder is the sum of the energies needed by the individual bits and can be found by summing E(k,l) over k and l, as shown in eq. (4).

When the number of bits B equals the number of words m, eq. (5) shows that the power depends on the square of the number of bits, as explained earlier for computational power. This shows how the total energy of the cascade adder can be related to the energy consumption of the basic building element, the full-adder FA. Composing higher-level, multiplication-like functions is now possible.

Chain versus tree implementations of adders

In ripple-through-carry adders, a node can have multiple unwanted transitions in a single clock cycle before settling to its final value. Glitches increase the power consumption. For power-effective designs they have to be eliminated or, when this is not possible, at least limited in number. One of the most effective ways of minimizing the number of glitches is balancing all signal paths in the circuit and reducing the logic depth. Fig. 12 shows the tree and the chain implementation of an adder. The chain circuit shown in fig. 12(a) behaves as follows. While adder 1 computes a1+a2, adder 2 computes (a1+a2)+a3 with the old value of a1+a2. After the new value of a1+a2 has propagated through adder 1, adder 2 recomputes the correct sum (a1+a2)+a3. Hence, a glitch originates at the output of adder 2 if there is a change in the value of a1+a2. At the output of adder 3 an even worse situation may occur.

Generalizing to an N-stage adder, it is easy to show that in the worst case the output will have N extra transitions, and the total number of extra transitions over all N stages grows to N(N+1)/2. In practice the transition activity due to glitching will be less, since the worst-case input pattern occurs infrequently. In the tree adder shown in fig. 12(b), the paths are balanced, so the number of glitches is reduced.
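The count is just the sum of the worst-case extra transitions per stage:

\[ \sum_{i=1}^{N} i = \frac{N(N+1)}{2} \]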

In conclusion, increasing the logic depth increases the number of spurious transitions due to glitching. Decreasing the logic depth reduces the number of glitches, making it possible to speed up the circuit and enabling some voltage down-scaling while the throughput is kept fixed. On the other hand, decreasing the logic depth increases the number of registers required by the design, adding some extra power consumption due to the registers. The choice of increasing or reducing the logic depth of an architecture is based on a trade-off between the minimization of glitching power and the increase of register power.

ULTRA FAST ADDER

The block diagram of the 3-bit ultra fast adder is shown in fig 13; it is composed of three multiplexer circuits and six full-adder circuits.

The full-adder circuits are divided into two regions, a top layer and a bottom layer. The inputs A, B and Cin are applied to the corresponding full adders, and the sum outputs of the top and bottom layers are fed as inputs to the corresponding multiplexer.

The select line of each multiplexer is driven by the signal SEL, and the multiplexer output reflects either the upper or the lower full-adder circuit, for both sum and carry.
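Fig 13 is not reproduced here, but the description matches a carry-select style stage: two full adders compute the result for both possible carry-in values, and SEL picks the correct one. A hedged VHDL sketch of one such stage, with illustrative names, follows:

  library ieee;
  use ieee.std_logic_1164.all;

  entity select_stage is
    port (a, b      : in  std_logic;
          sel       : in  std_logic;   -- the incoming carry, used as mux select
          sum, cout : out std_logic);
  end select_stage;

  architecture rtl of select_stage is
    signal s0, c0 : std_logic;  -- bottom layer: assumes carry-in = '0'
    signal s1, c1 : std_logic;  -- top layer: assumes carry-in = '1'
  begin
    s0 <= a xor b;        c0 <= a and b;
    s1 <= not (a xor b);  c1 <= a or b;

    sum  <= s1 when sel = '1' else s0;  -- multiplexer on the sum
    cout <= c1 when sel = '1' else c0;  -- multiplexer on the carry
  end rtl;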

Binary subtraction

Binary subtraction can be done by noticing that in order to compute x - y, we can instead compute x + (-y). We know from the section on binary arithmetic how to negate a number: invert all the bits and add 1. Thus we can compute the expression as x + inv(y) + 1. It suffices to invert all the inputs of the second operand before they reach the adder, but how do we add the 1? That seems to require another adder just for that. Luckily, we have an unused carry-in signal at position 0 that we can use: putting a 1 on this input in effect adds one to the result. The complete circuit thus combines the adder with inversion of the second operand.
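A VHDL sketch of the combined adder-subtracter described above (names ours): the second operand is inverted bitwise, and the extra 1 plays the role of the 1 fed into the otherwise unused carry-in:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity addsub is
    generic (n : positive := 8);
    port (x, y : in  std_logic_vector(n-1 downto 0);
          sub  : in  std_logic;  -- '0' = add, '1' = subtract
          r    : out std_logic_vector(n-1 downto 0));
  end addsub;

  architecture rtl of addsub is
    signal y2 : std_logic_vector(n-1 downto 0);
  begin
    y2 <= not y when sub = '1' else y;  -- invert the second operand for subtraction
    -- the "+ 1" below is the 1 injected on the carry-in at position 0
    r <= std_logic_vector(unsigned(x) + unsigned(y2) + 1) when sub = '1'
         else std_logic_vector(unsigned(x) + unsigned(y2));
  end rtl;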

Binary multiplication and division

Binary multiplication is even harder than binary addition. There is no good iterative combinatorial circuit available, so we have to bring in heavier artillery: a sequential circuit that computes one addition for every clock pulse.

Binary Comparator

The purpose of a 2-bit binary comparator is quite simple: it determines whether one 2-bit input number is larger than, equal to, or less than the other. It has a comparison unit that receives the two inputs and compares them, and an enable unit that passes the comparison result to the comparator's outputs according to an enable signal.
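A VHDL sketch of such a comparator with an enable unit (port names ours):

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity comparator2 is
    port (a, b       : in  std_logic_vector(1 downto 0);
          en         : in  std_logic;  -- enable: all outputs forced low when '0'
          gt, eq, lt : out std_logic);
  end comparator2;

  architecture rtl of comparator2 is
  begin
    gt <= '1' when en = '1' and unsigned(a) > unsigned(b) else '0';
    eq <= '1' when en = '1' and unsigned(a) = unsigned(b) else '0';
    lt <= '1' when en = '1' and unsigned(a) < unsigned(b) else '0';
  end rtl;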

NEED FOR TESTING

As the density of VLSI products increases, their testing becomes more difficult and costly. Generating test patterns has shifted from a deterministic approach, in which a test pattern is generated automatically based on a fault model and an algorithm, to a random selection of test signals. While in real estate the refrain is "Location! Location! Location!", the comparable advice in IC design should be "Testing! Testing! Testing!". Whether deterministic or random generation of test patterns is used, the patterns applied to a VLSI chip can no longer cover all possible defects. Consider the manufacturing process for VLSI chips shown in Fig. 1. Two kinds of cost are incurred by the test process: the cost of testing and the cost of accepting an imperfect chip. The first is a function of the time spent on testing or, equivalently, the number of test patterns applied to the chip; it adds to the cost of the chips themselves. The second reflects the fact that when a defective chip is passed as good, its failure may become very costly after it is embedded in its application. An optimal testing strategy should trade off both costs and determine an adequate test length (in terms of testing period or number of test patterns).

Apart from cost, two factors need to be considered when determining test lengths. The first is the production yield, which is the probability that a product is functionally correct at the end of the manufacturing process. If the yield is high, we may not need to test extensively, since most chips tested will be "good", and vice versa. The other factor is the coverage function of the test process, defined as the probability of detecting a defective chip given that it has been tested for a particular duration or a given number of test patterns. If we assume that all possible defects can be detected by the test process, the coverage function can be regarded as a probability distribution function of the detection time, given that the chip under test is bad. Thus, by investigating the density function or probability mass function, we can calculate the marginal gain in detection if the test continues. In general, the coverage function of a test process can be obtained through theoretical analysis or through experiments on simulated fault models. With a given production yield, the fault coverage requirement to attain a specified defect level, defined as the probability of having a "bad" chip among all chips passed by a test process, can then be determined. While most problems in VLSI design have been reduced to algorithms in readily available software, the responsibility for the various levels of testing and for testing methodology can be a significant burden on the designer.

The yield of a particular IC is the number of good die divided by the total number of die per wafer. Due to the complexity of the manufacturing process, not all die on a wafer operate correctly. Small imperfections in the starting material, in processing steps, or in photomasking may result in bridged connections or missing features. It is the aim of a test procedure to determine which die are good and should be used in end systems.

Testing a die can occur:

  • At the wafer level
  • At the packaged level
  • At the board level
  • At the system level
  • In the field

Obviously, if faults can be detected at the wafer level, the cost of manufacturing is kept the lowest. In some circumstances, however, the cost of developing adequate tests at the wafer level, mixed-signal requirements or speed considerations may require that further testing be done at the packaged-chip level or the board level. A component vendor can test only at the wafer or chip level. Special systems, such as satellite-borne electronics, might be tested exhaustively at the system level.

Tests fall into two main categories. The first set of tests verifies that the chip performs its intended function; that is, that it performs a digital filtering function, acts as a microprocessor, or communicates using a particular protocol. In other words, these tests assert that all the gates in the chip, acting in concert, achieve a desired function. They are usually used early in the design cycle to verify the functionality of the circuit, and will be called functionality tests. The second set of tests verifies that every gate and register in the chip functions correctly. These tests are used after the chip is manufactured to verify that the silicon is intact, and will be called manufacturing tests. In many cases these two sets of tests may be one and the same, although the natural flow of design usually has a designer considering function before manufacturing concerns.

MANUFACTURING TEST PRINCIPLES

A critical factor in all LSI and VLSI design is the need to incorporate methods of testing circuits. This task should proceed concurrently with any architectural considerations and not be left until fabricated parts are available.

Figure 5.1(a) shows a combinational circuit with n inputs. To test this circuit exhaustively, a sequence of 2^n inputs must be applied and observed to fully exercise the circuit. This combinational circuit is converted to a sequential circuit with the addition of m storage registers, as shown in Figure 5.1(b); the state of the circuit is then determined by the inputs and the previous state. A minimum of 2^(n+m) test vectors must be applied to exhaustively test the circuit. Clearly, this is an important area of design that has to be well understood.

OPTIMAL TESTING

With the increased complexity of VLSI circuits, testing has become more costly and time-consuming. The design of a testing strategy, which is specified by the testing period based on the coverage function of the testing algorithm, involves trading off the cost of testing and the penalty of passing a bad chip as good. The optimal testing period is first derived, assuming the production yield is known. Since the yield may not be known a priori, an optimal sequential testing strategy which estimates the yield based on ongoing testing results, which in turn determines the optimal testing period, is developed next. Finally, the optimal sequential testing strategy for batches in which N chips are tested simultaneously is presented. The results are of use whether the yield stays constant or varies from one manufacturing run to another.

VLSI DESIGN

VLSI design can be classified based on the prototype:

  • ASIC
  • FPGA

ASIC - Application Specific Integrated Circuits

An application-specific integrated circuit (ASIC) is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use. For example, a chip designed solely to run a cell phone is an ASIC. Intermediate between ASICs and industry standard integrated circuits, like the 7400 or the 4000 series, are application specific standard products (ASSPs).

As feature sizes have shrunk and design tools have improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 gates to over 100 million. Modern ASICs often include entire 32-bit processors and memory blocks, including ROM, RAM, EEPROM and Flash, along with other large building blocks. Such an ASIC is often termed an SoC (system-on-a-chip). Designers of digital ASICs use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of the ASIC.

Fig 20. ASIC design flow

FPGA - Field Programmable Gate Array

Field-programmable gate array (FPGA) technology continues to gain momentum: the worldwide FPGA market was expected to grow from $1.9 billion in 2005 to $2.75 billion by 2010. Since their invention by Xilinx in 1984, FPGAs have gone from being simple glue-logic chips to replacing custom application-specific integrated circuits (ASICs) and processors for signal processing and control applications.

What is an FPGA?

At the highest level, FPGAs are reprogrammable silicon chips. Using prebuilt logic blocks and programmable routing resources, you can configure these chips to implement custom hardware functionality without ever having to pick up a breadboard or soldering iron. You develop digital computing tasks in software and compile them down to a configuration file or bitstream that contains information on how the components should be wired together. In addition, FPGAs are completely reconfigurable and instantly take on a brand new "personality" when you recompile a different configuration of circuitry. In the past, FPGA technology was only available to engineers with a deep understanding of digital hardware design. The rise of high-level design tools, however, is changing the rules of FPGA programming, with new technologies that convert graphical block diagrams or even C code into digital hardware circuitry.

FPGA chip adoption across all industries is driven by the fact that FPGAs combine the best parts of ASICs and processor-based systems. FPGAs provide hardware-timed speed and reliability, but they do not require high volumes to justify the large upfront expense of custom ASIC design. Reprogrammable silicon also has the same flexibility as software running on a processor-based system, but it is not limited by the number of processing cores available. Unlike processors, FPGAs are truly parallel in nature, so different processing operations do not have to compete for the same resources. Each independent processing task is assigned to a dedicated section of the chip and can function autonomously without any influence from other logic blocks. As a result, the performance of one part of the application is not affected when additional processing is added.

Benefits of FPGA Technology

  • Performance
  • Time to market
  • Cost
  • Reliability
  • Long-term maintenance

Performance - Taking advantage of hardware parallelism, FPGAs exceed the computing power of digital signal processors (DSPs) by breaking the paradigm of sequential execution and accomplishing more per clock cycle. BDTI, a noted analyst and benchmarking firm, released benchmarks showing how FPGAs can deliver many times the processing power per dollar of a DSP solution in some applications. Controlling inputs and outputs (I/O) at the hardware level provides faster response times and specialized functionality to closely match application requirements.

Time to market - FPGA technology offers flexibility and rapid prototyping capabilities in the face of increased time-to-market concerns. You can test an idea or concept and verify it in hardware without going through the long fabrication process of custom ASIC design. You can then implement incremental changes and iterate on an FPGA design within hours instead of weeks. Commercial off-the-shelf (COTS) hardware is also available with different types of I/O already connected to a user-programmable FPGA chip.
The growing availability of high-level software tools decreases the learning curve through layers of abstraction, and these tools often include valuable IP cores (prebuilt functions) for advanced control and signal processing.

Cost - The nonrecurring engineering (NRE) expense of custom ASIC design far exceeds that of FPGA-based hardware solutions. The large initial investment in ASICs is easy to justify for OEMs shipping thousands of chips per year, but many end users need custom hardware functionality for only tens to hundreds of systems in development. The very nature of programmable silicon means that there is no cost for fabrication or long lead times for assembly. As system requirements often change over time, the cost of making incremental changes to FPGA designs is quite negligible when compared to the large expense of respinning an ASIC.

Reliability - While software tools provide the programming environment, FPGA circuitry is truly a "hard" implementation of program execution. Processor-based systems often involve several layers of abstraction to help schedule tasks and share resources among multiple processes. The driver layer controls hardware resources and the operating system manages memory and processor bandwidth. For any given processor core, only one instruction can execute at a time, and processor-based systems are continually at risk of time-critical tasks pre-empting one another. FPGAs, which do not use operating systems, minimize reliability concerns with true parallel execution and deterministic hardware dedicated to every task.

Fig 21. FPGA design flow

Long-term maintenance - As mentioned earlier, FPGA chips are field-upgradable and do not require the time and expense involved with ASIC redesign. Digital communication protocols, for example, have specifications that can change over time, and ASIC-based interfaces may cause maintenance and forward-compatibility challenges. Being reconfigurable, FPGA chips are able to keep up with future modifications that might be necessary. As a product or system matures, you can make functional enhancements without spending time redesigning hardware or modifying the board layout.

Choosing an FPGA

When examining the specifications of an FPGA chip, note that they are often divided into configurable logic blocks like slices or logic cells, fixed-function logic such as multipliers, and memory resources like embedded block RAM. There are many other FPGA chip components, but these are typically the most important when selecting and comparing FPGAs for a particular application.

  Resource        Virtex-II 1000  Virtex-II 3000  Spartan-3 1000  Spartan-3 2000  Virtex-5 LX30  Virtex-5 LX50  Virtex-5 LX85  Virtex-5 LX110
  Gates           1 million       3 million       1 million       2 million       -----          -----          -----          -----
  Flip-Flops      10,240          28,672          15,360          40,960          19,200         28,800         51,840         69,120
  LUTs            10,240          28,672          15,360          40,960          19,200         28,800         51,840         69,120
  Multipliers     40              96              24              40              32             48             48             64
  Block RAM (kb)  720             1,728           432             720             1,152          1,728          3,456          4,608

Table 1. FPGA resource specifications for various families

Table 1 shows resource specifications used to compare FPGA chips within various Xilinx families. The number of gates has traditionally been a way to compare the size of FPGA chips to ASIC technology, but it does not truly describe the number of individual components inside an FPGA. This is one of the reasons that Xilinx did not specify the number of equivalent system gates for the new Virtex-5 family.

VHDL

In this chapter we present an introduction to the hardware design process using hardware description languages, and VHDL in particular.
As the size and complexity of digital systems increase, more computer-aided design tools are introduced into the hardware design process. The early paper-and-pencil design methods have given way to sophisticated design entry, verification and automatic hardware generation tools. The newest addition to this design methodology is the introduction of hardware description languages (HDLs), and a great deal of effort is being expended in their development. Actually, the use of such languages is not new: languages such as CDL, ISP and AHPL have been in use for some years. However, their primary application has been the verification of a design's architecture. They do not have the capability to model designs with a high degree of accuracy; that is, their timing model is not precise and/or their language constructs imply a certain hardware structure. Newer languages such as HHDL, ISP and VHDL have more universal timing models and imply no particular hardware structure. A general view of the design process using HDLs is shown in Fig 22.

Fig 22. Hardware design process

Hardware description languages have two main applications: documenting a design and modeling it. Good documentation of a design helps to ensure design accuracy and design portability. Since a simulator supports them, the model inherent in an HDL description can be used to validate a design. Prototyping of complicated systems is extremely expensive, and the goal of those concerned with the development of hardware languages is to replace this prototyping process with validation through simulation. Other uses of HDL models are test generation and silicon compilation.

Use of VHDL tools in VLSI design

IC designers are always looking for ways to increase their productivity without degrading the quality of their designs, so it is no wonder that they have embraced logic synthesis tools. In the last few years these tools have grown to be capable of producing designs as good as those of a human designer. Now logic synthesis is helping to bring about a switch to design using a hardware description language to describe the structure and behavior of circuits, as evidenced by the recent availability of logic synthesis tools using the very high-speed integrated circuit hardware description language (VHDL). Logic synthesis tools can automatically produce a gate-level netlist, allowing designers to formulate their design in a high-level description such as VHDL. Logic synthesis provides two fundamental capabilities: automatic translation of a high-level description into a logic design, and optimization to decrease the circuit area and increase its speed. Many designs created with logic synthesis tools are as good as or better than those created manually, in terms of chip area occupied and IC signal speed. The ability to translate a high-level description into a netlist automatically can improve design efficiency markedly. It quickly gives designers an accurate estimate of their logic's potential speed and chip real-estate needs. In addition, designers can quickly implement a variety of architectural choices and compare their area and speed characteristics. In a design methodology based on synthesis, the designer begins by describing a design's behavior in high-level code, capturing its intended functionality rather than its implementation.
Once the functionality has been thoroughly verified through simulation, the designer reformulates the design in terms of large structural blocks such as registers, arithmetic units and storage elements, plus the remaining combinational logic. Although combinational logic typically constitutes only about 20% of a chip's area, its creation can easily absorb 80% of the time in a gate-level design. The resulting description is called Register Transfer Level (RTL), since its equations describe how data is transferred from one register to another. In a logic synthesis process, the tool's first step is to minimize the complexity of the logical equations, and hence the size, by finding common terms that can be used repeatedly. In a translation step called technology mapping, the minimized equations are mapped into a set of gates. The non-synthesized portions of the logic are also mapped into a technology-specific implementation at this point. Here the designer must choose the application-specific integrated circuit (ASIC) vendor library in which to implement the chip, so that the logic synthesis tool may efficiently apply the gates available in that library.

The primary consideration in the entire synthesis process is the quality of the resulting circuit. Quality in logic synthesis is measured by how close the circuit comes to meeting the designer's speed, chip-area and power goals. These goals can apply to the entire IC or to portions of the logic. Logic synthesis has achieved its greatest success on synchronous designs that have significant amounts of combinational logic. Asynchronous designs require that designers formulate timing constraints explicitly. Unlike the behavior of asynchronous designs, the behavior of synchronous designs is not affected by events such as the arrival times of signals. By devising a set of constraints that the synthesis tool has to meet, the designer directs the process towards the most desirable solution. Although it might be desirable to build a given circuit that is both small and fast, area typically trades off with speed. Thus designers must choose the trade-off point that is best for a specific application.

When a designer starts a synthesis process by translating an RTL description into a netlist, the synthesis tool must first be able to understand the RTL description. A number of languages known as hardware description languages (HDLs) have been developed for this purpose. HDL statements can describe circuits in terms of the structure of the system, its behavior, or both. One reason HDLs are so powerful, in fact, is that they support a wide variety of design descriptions. An HDL simulator handles all those descriptions, applying the same simulation and test vectors from the design's behavioral level all the way down to the gate level. This integrated approach reduces the problems. As logic synthesis matures, it will allow designers to concentrate more on the actual function and behavior rather than the details of the circuit. Logic synthesis tools are becoming capable of more behavior-level tasks, such as synthesizing sequential logic and deciding if and where storage elements are needed in a design. Existing logic synthesis tools are moving up the design ladder while behavioral research is extending down to the RTL level. Eventually they will merge, giving designers a complete set of tools to automate designs from concept to layout.

Scope of VHDL

VHDL satisfies all the requirements for the hierarchical description of electronic circuits from system level down to switch level.
It can support all levels of timing specification and constraints, and is capable of detecting and signaling timing violations. The language models the concurrency present in digital systems and supports the recursive nature of finite state machines. The concepts of packages and configurations allow design libraries for the reuse of previously designed parts.

Why VHDL?

A design engineer in the electronics industry uses hardware description languages to keep pace with the productivity of competitors. With VHDL we can quickly describe and synthesize circuits of several thousand gates. In addition, VHDL provides the capabilities described below:

  • Power and flexibility
  • Device- Independent design
  • Portability
  • Benchmarking capabilities
  • ASIC migration
  • Quick time-to-market and low cost

Power and Flexibility

VHDL has powerful language constructs with which we can write descriptions of complex control logic very easily.

It also has multiple levels of design description for controlling the design implementation. It supports design libraries and the creation of reusable components. It provides design hierarchies to create modular designs. It is one language for design and simulation.

Device-Independent Design

VHDL permits us to create a design without having to first choose a device for implementation. With one design description, we can target many device architectures, and we can optimize our design for resource utilization or performance without being intimately familiar with the device. It permits multiple styles of design description.

Portability

VHDL portability permits us to simulate the same design description that we synthesize. Simulating a large design description before synthesizing it can save considerable time. As VHDL is a standard, a design description can be taken from one simulator to another, from one synthesis tool to another, and from one platform to another, meaning a design description can be used in multiple projects. Fig 23 illustrates that the source code for a design can be used with any synthesis tool, and the design can be implemented in any device that is supported by a synthesis tool.

Fig 23. VHDL provides flexibility between compilers and device-independent design

Benchmarking Capabilities

Device-independent design and portability allow benchmarking a design using different device architectures and different synthesis tools. We can take a completed design description and synthesize it, create logic for it, evaluate the results, and finally choose the device - a CPLD or an FPGA - that best suits our design requirements.

ASIC Migration

The efficiency of VHDL allows our product to hit the market quickly if it has been synthesized on a CPLD or FPGA. When production volume reaches appropriate levels, VHDL facilitates the development of an Application Specific Integrated Circuit (ASIC). Sometimes the exact code used with the CPLD can be used with the ASIC, and because VHDL is a well-defined language, we can be assured that our ASIC vendor will deliver a device with the expected functionality.

Quick Time-to-Market and Low Cost

VHDL and programmable logic together facilitate a speedy design process. VHDL permits designs to be described quickly. Programmable logic eliminates NRE expenses and facilitates quick design iterations. Synthesis makes it all possible. VHDL and programmable logic combine as a powerful vehicle to bring products to market in record time.

Design Synthesis

The design process can be explained in six steps:

1. Define the design requirements
2. Describe the design in VHDL
3. Simulate the source code
4. Synthesize, optimize and fit (place and route) the design
5. Simulate the post-layout (fit) design model
6. Program the device

Define the Design Requirements

Before launching into writing code for our design, we must have a clear idea of the design objectives and requirements: the function of the design, the required setup and clock-to-output times, the maximum frequency of operation, and the critical paths.

Describe the Design in VHDL

Formulate the design: having an idea of the design requirements, we have to write efficient code that is realized, through synthesis, in the logic implementation we intended.

Code the design: after deciding upon the design methodology, we should code the design referring to the block, data-flow and state diagrams, such that the code is syntactically and semantically correct.
Simulate the Source Code

With source-code simulation, flaws can be detected early in the design cycle, allowing us to make corrections with the least possible impact on the schedule. This is especially valuable for larger designs, for which synthesis and place-and-route can take a couple of hours.

Synthesize, Optimize and Fit the Design

Synthesis: the process by which netlists or equations are created from design descriptions, which may be abstract. VHDL synthesis software tools convert VHDL descriptions to technology-specific netlists or sets of equations.

Optimization: the optimization process depends on three things: the form of the Boolean expressions, the type of resources available, and automatic or user-applied synthesis directives (sometimes called constraints). Optimization for CPLDs involves reducing the logic to a minimal sum-of-products, which is then further optimized for a minimal literal count. This reduces the product-term utilization and the number of logic-block inputs required for any given expression. Fig 24 illustrates the synthesis and optimization processes.

Fig 24. Synthesis and optimization processes

Fitting: the process of taking the logic produced by synthesis and optimization and placing it into a logic device, transforming the logic (if necessary) to obtain the best fit. It is a term typically used to describe the process of allocating resources for CPLD-type architectures. Placing and routing is the process of taking the logic produced by synthesis and optimization, transforming it if necessary, packing it into the FPGA logic structures (cells), placing the logic cells in optimal locations and routing the signals from logic cell to logic cell or I/O. Place-and-route tools have a large impact on the performance of FPGA designs, since propagation delays can depend significantly on routing delays. Fitting a design into a CPLD can be a complicated process because of the numerous ways in which logic can be placed in the device. Before any placement, the logic equations have to be further optimized depending upon the available resources. Fig 25 shows the process of synthesizing, optimizing and fitting a design into a CPLD and an FPGA.

Fig 25. The process from synthesis to design implementation

Simulate the Post-Layout Design Model

A post-layout simulation enables us to verify not only the functionality of our design but also its timing, such as setup, clock-to-output and register-to-register times, and/or to fit our design to a new logic device.

Program the Device

After completing the design description, synthesizing, optimizing, fitting and successfully simulating our design, we are ready to program our device and continue work on the rest of our system design. The synthesis, optimization and fitting software will produce a file for use in programming the device.

Design Tool Flow

The topics above cover the design process. Fig 26 shows the EDA tool-flow diagram, with the inputs and outputs for each tool used in the design process.

Fig 26. Tool flow diagram

The inputs to the synthesis software are the VHDL design source code, synthesis directives and the device selection. The output of the synthesis software - an architecture-specific netlist or set of equations - is then used as the input to the fitter (or place-and-route software, depending on whether the target device is a CPLD or an FPGA). The outputs of this tool are information about resource utilization, static and point-to-point timing analysis, a device programming file and a post-layout simulation model.
The simulation model, along with a test bench or other stimulus format, is used as the input to the simulation software. The outputs of the simulation software are often waveforms or data files.

History of VHDL

In the search for a standard design and documentation tool for the Very High Speed Integrated Circuits (VHSIC) program, the United States Department of Defense (DoD), in 1981, sponsored a workshop on hardware description languages at Woods Hole, Massachusetts. In 1983, the DoD established requirements for a standard VHSIC Hardware Description Language (VHDL) based on the recommendations of the Woods Hole workshop. A contract for the development of VHDL, its environment and its software was awarded to IBM, Texas Instruments and Intermetrics. The time line of VHDL is as follows:

  • Woods Hole requirements, 1981
  • Intermetrics, TI and IBM under DoD contract, 1983-1985: VHDL 7.2
  • IEEE standardization: VHDL 1987
  • First synthesized chip, IBM, 1988
  • IEEE restandardization: VHDL 1993

Describing a design in VHDL

In VHDL an entity is used to describe a hardware module. An entity can be described using:

1. Entity declaration
2. Architecture
3. Configuration
4. Package declaration
5. Package body

Entity declaration: it defines the name, the input and output signals, and the modes of a hardware module.

Syntax:

  entity entity_name is
    port declaration;
  end entity_name;

An entity declaration starts with the 'entity' keyword and ends with the 'end' keyword. Ports are interfaces through which an entity can communicate with its environment. Each port must have a name, a direction and a type. An entity may also have no port declaration. The direction will be in, out or inout:

  • in - port can be read
  • out - port can be written
  • inout - port can be read and written
  • buffer - port can be read and written; it can have only one source

Architecture: it describes the internal workings of a design, that is, what is inside the design. Each entity has at least one architecture, and an entity can have many architectures. An architecture can be described using structural, dataflow, behavioral or mixed style, and can describe a design at different levels of abstraction such as gate level, register transfer level (RTL) or behavior level.

Syntax:

  architecture architecture_name of entity_name is
    architecture_declarative_part;
  begin
    statements;
  end architecture_name;

Here we specify the entity name for which we are writing the architecture body. The architecture statements should be between the begin and end keywords. The architecture declarative part may contain variable, constant or component declarations.

Configuration: if an entity has many architectures, the binding of any one of the possible architectures to its entity is done using a configuration. It is used to bind an architecture body to its entity, and a component to an entity.

Syntax:

  configuration configuration_name of entity_name is
    block_configuration;
  end configuration_name;

block_configuration defines the binding of components in a block. This can be written as:

  for block_name
    component_binding;
  end for;

block_name is the name of the architecture body. Component binding binds the components of the block to entities. This can be written as:

  for component_labels : component_name
    block_configuration;
  end for;

Package declaration: a VHDL package declaration is identified by the package keyword and is used to collect commonly used declarations for use globally among different design units.
A package may serve as a common storage area, used to store such things as type declarations, constants and global subprograms. Items defined within a package can be made visible to any other design unit in the complete VHDL design, and they can be compiled into libraries for later re-use. A package can consist of two basic parts: a package declaration and an optional package body. Package declarations can contain the following types of statements:

  • Type and subtype declarations
  • Constant declarations
  • Global signal declarations
  • Function and procedure declarations
  • Attribute specifications
  • File declarations
  • Component declarations
  • Alias declarations
  • Disconnect specifications
  • Use clauses

Items appearing within a package declaration can be made visible to other design units through a use statement.

Syntax:

  package package_name is
    declarations;
  end package_name;

Package body: if the package contains declarations of subprograms (functions or procedures) or defines one or more deferred constants (constants whose value is not immediately given), then a package body is required in addition to the package declaration. A package body (which is specified using the package body keyword combination) must have the same name as its corresponding package declaration, but it can be located anywhere in the design, in the same or a different source file. A package body is used to provide the definitions of the subprograms and the values of the deferred constants declared in the corresponding package declaration.

Syntax:

  package body package_name is
    function_procedure definitions;
  end package_name;

Fig 27, below, summarizes the various design units of VHDL.

Fig 27. Design units of VHDL

Modeling Hardware with VHDL

Fig 28 shows the structure of an entity.

Fig 28. Structure of an entity

The internal working of an entity can be defined using different modeling styles inside the architecture body:

1. Dataflow modeling
2. Behavioral modeling
3. Structural modeling

Dataflow modeling: in this style of modeling, the internal working of an entity is implemented using concurrent signal assignments. Dataflow modeling is often called register transfer logic, or RTL. There are some drawbacks to using a dataflow method of design in VHDL. First, there are no built-in registers in VHDL; the language was designed to be general-purpose, and the emphasis was placed by VHDL's designers on its behavioral aspects.

Behavioral modeling: the highest level of abstraction supported in VHDL is called the behavioral level of abstraction. When creating a behavioral description of a circuit, we describe the circuit in terms of its operation over time. The concept of time is the critical distinction between behavioral descriptions of circuits and lower-level descriptions (specifically, descriptions created at the dataflow level of abstraction). Examples of behavioral forms of representation include state diagrams, timing diagrams and algorithmic descriptions. In a behavioral description, the concept of time may be expressed precisely, with actual delays between related events (such as the propagation delays within gates and on wires), or it may simply be an ordering of operations that are expressed sequentially (such as in a functional description of a flip-flop). In this style of modeling, the internal working of an entity is implemented using a set of statements.
It contains:

  • Process statements
  • Sequential statements
  • Signal assignment statements
  • Wait statements

The process statement is the primary mechanism used to model the behavior of an entity. It contains sequential statements, variable assignment (:=) statements and signal assignment (<=) statements.
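As a small behavioral example of these constructs (our own, not from the text), a D flip-flop described with a process containing sequential statements and a signal assignment:

  library ieee;
  use ieee.std_logic_1164.all;

  entity dff is
    port (clk, d : in  std_logic;
          q      : out std_logic);
  end dff;

  architecture behavioral of dff is
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        q <= d;  -- signal assignment executed sequentially inside the process
      end if;
    end process;
  end behavioral;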
