Ans. Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution—e.g., to generate common short sequences of instructions to run inline, instead of in a subroutine.
Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the 1950s. Modern assemblers, especially for RISC-based architectures such as MIPS, Sun SPARC, and HP PA-RISC, as well as for x86(-64), optimize instruction scheduling to exploit the CPU pipeline efficiently.
There are two types of assemblers, based on how many passes through the source are needed to produce the executable program. One-pass assemblers go through the source code once and assume that all symbols will be defined before any instruction that references them. Two-pass assemblers (and multi-pass assemblers) build a table of all unresolved symbols in the first pass, then use the second pass to resolve these addresses. The advantage of one-pass assemblers is speed, which is not as important as it once was given advances in computer speed and capabilities. The advantage of the two-pass assembler is that symbols can be defined anywhere in the program source. As a result, the program can be organized in a more logical and meaningful way, which makes two-pass assembler programs easier to read and maintain.
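To see why a second pass is needed, consider a forward reference; the following x86-style sketch (with illustrative labels) jumps to a label that is only defined later in the source:
        cmp  ax, 0          ; test the value in AX
        je   done           ; forward reference: "done" is not defined yet
        dec  ax             ; ... more code here ...
done:   ret                 ; only here does the assembler learn the address of "done"
A one-pass assembler reading the text from top to bottom cannot fill in the address of done when it encounters the je instruction; a two-pass assembler records the unresolved reference in its symbol table on the first pass and resolves it on the second.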
More sophisticated high-level assemblers provide language abstractions such as advanced control structures, high-level procedure/function declarations and invocations, and high-level abstract data types such as structures and unions.
Note that, in normal professional usage, the term assembler is often used ambiguously: it is frequently used to refer to an assembly language itself, rather than to the assembler utility. Thus: "CP/CMS was written in S/360 assembler," as opposed to "ASM-H was a widely used S/370 assembler."
A program written in assembly language consists of a series of mnemonic instructions that, when translated by an assembler, correspond to a stream of executable machine instructions which can be loaded into memory and executed.
For example, an x86/IA-32 processor can execute the following binary instruction as expressed in machine language:
10110000 01100001
The equivalent assembly language representation is easier to remember (example in Intel syntax):
mov al, 061h
This instruction means: move the hexadecimal value 61 (97 decimal) into the AL register.
The mnemonic "mov" represents the opcode 1011 which moves the value in the second operand into the register indicated by the first operand. The mnemonic was chosen by the instruction set designer to abbreviate "move", making it easier for the programmer to remember. A comma-separated list of arguments or parameters follows the opcode; this is a typical assembly language statement.
In practice many programmers drop the word mnemonic and, technically incorrectly, call "mov" an opcode. When they do this they are referring to the underlying binary code it represents. To put it another way, a mnemonic such as "mov" is not an opcode, but because it symbolizes an opcode, one might refer to "the opcode mov" when one intends the binary opcode it symbolizes rather than the symbol (the mnemonic) itself. Since few modern programmers need to be mindful of which binary patterns are the opcodes for specific instructions, the distinction has in practice become somewhat blurred among programmers, but not among processor designers.
Transforming assembly into machine language is accomplished by an assembler, and the reverse by a disassembler. Unlike in high-level languages, there is usually a one-to-one correspondence between simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences.
Each computer architecture and processor architecture has its own machine language. On this level, each instruction is simple enough to be executed using a relatively small number of electronic circuits. Computers differ by the number and type of operations they support. For example, a new 64-bit machine would have different circuitry from a 32-bit machine. They may also have different sizes and numbers of registers, and different representations of data types in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the manufacturer and used in its documentation.
Any assembly language consists of three types of instruction statements, which are used to define the program's operations: opcode mnemonics, data definitions, and assembly directives.
Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, an opcode mnemonic is a symbolic name for a single executable machine language instruction, and there is at least one mnemonic defined for each machine language instruction. Each instruction typically consists of an operation (opcode) plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be either immediate (typically one-byte values, coded in the instruction itself) or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works.
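For instance, in x86 assembly (Intel syntax) the same mnemonic can take different kinds of operands; a small illustrative sketch, assuming the label count is defined elsewhere:
        mov  eax, 5          ; immediate operand: the value 5 is coded in the instruction itself
        mov  eax, [count]    ; memory operand: load the value stored at the address labelled "count"
        add  eax, ebx        ; register operands: an instruction referring to a pair of values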
There are instructions used to define data elements to hold data and variables. They define the type, length, and alignment of the data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined.
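A minimal MASM-style sketch of such data definitions (all of the names are illustrative):
        .DATA
count   DW   0                  ; a 16-bit (word) variable
total   DD   0                  ; a 32-bit (doubleword) variable
msg     DB   "assembler", 0     ; a byte string with a terminating zero
        ALIGN 4                 ; align the next item on a 4-byte boundary
table   DD   16 DUP (?)         ; reserve 16 uninitialized doublewords
        PUBLIC total            ; make "total" visible to separately assembled programs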
Assembly directives are instructions that are executed by the assembler at assembly time, not by the CPU at run time. They can make the assembly of the program dependent on parameters input by the programmer, so that one program can be assembled in different ways, perhaps for different applications. They can also be used to manipulate the presentation of the program to make it easier for the programmer to read and maintain.
(For example, pseudo-ops can be used to reserve storage areas and optionally set their initial contents.) The names of pseudo-ops often start with a dot to distinguish them from machine instructions.
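A small MASM-style sketch of conditional assembly driven by an assembly-time parameter (DEBUG, BUFLEN, and buffer are illustrative names; DEBUG would typically be defined on the assembler command line):
IFDEF DEBUG
BUFLEN  EQU  1024               ; debug builds reserve a larger buffer
ELSE
BUFLEN  EQU  64                 ; production builds use a small buffer
ENDIF
buffer  DB   BUFLEN DUP (?)     ; storage reserved according to the parameter
The same source is thus assembled in different ways depending on how it is invoked.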
Some assemblers also support pseudo-instructions, which generate two or more machine instructions.
Symbolic assemblers allow programmers to associate arbitrary names (labels or symbols) with memory locations. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
Most assemblers provide flexible symbol management, allowing programmers to manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.
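A short MASM-style sketch (with illustrative names) showing a named constant, a labeled storage location, and a value the assembler computes itself:
BUFSIZE EQU  512                ; symbolic name for a literal constant
msg     DB   "ready", 13, 10    ; label naming a storage location
MSGLEN  EQU  $ - msg            ; length computed by the assembler at assembly time
        mov  cx, MSGLEN         ; instructions refer to values and locations by name
        mov  si, OFFSET msg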
Assembly languages, like most other computer languages, allow comments to be added to assembly source code that are ignored by the assembler. Good use of comments is even more important with assembly code than with higher-level languages, as the meaning and purpose of a sequence of instructions is harder to decipher from the code itself.
Wise use of these facilities can greatly simplify the problems of coding and maintaining low-level code. Raw assembly source code as generated by compilers or disassemblers — code without any comments, meaningful symbols, or data definitions — is quite difficult to read when changes must be made.
Many assemblers support macros, programmer-defined symbols that stand for some sequence of text lines. This sequence of text lines may include a sequence of instructions, or a sequence of data storage pseudo-ops. Once a macro has been defined using the appropriate pseudo-op, its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them just as though they had appeared in the source code file all along (including, in better assemblers, expansion of any macros appearing in the replacement text).
Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be much shorter (requiring fewer lines of source code from the application programmer, as with a higher-level language). They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters, and provide other similar features.
Many assemblers have built-in macros for system calls and other special code sequences.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate a large number of assembly language instructions or data definitions based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language, since such programmers are not working with a computer's lowest-level conceptual elements.
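As a minimal sketch, a MASM-style macro with parameters (the macro name and the operands in the invocation are illustrative):
SWAP    MACRO x, y              ; define a macro taking two parameters
        mov  eax, x
        xchg eax, y
        mov  x, eax
        ENDM

        SWAP count, total       ; each use expands into the three instructions above
Each invocation is replaced by the macro body with count and total textually substituted for x and y.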
Macros were used to customize large-scale software systems for specific customers in the mainframe era, and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, CICS (Customer Information Control System) and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large Global Distribution Systems (GDS) and credit card systems today.
It was also possible to use solely the macro processing capabilities of an assembler to generate code written in completely different languages. For example, a pure macro assembler program could generate a version of a program in COBOL: the program would contain lines of COBOL code inside assembly-time operators instructing the assembler to generate arbitrary code.
This was because, as was realized in the 1970s, the concept of "macro processing" is independent of the concept of "assembly"; the former is, in modern terms, closer to word processing or text processing than to generating object code. The concept of macro processing appeared, and still appears, in the C programming language, which supports "preprocessor instructions" to set variables and make conditional tests on their values. Note that unlike certain earlier macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to" (the latter would allow the programmer to loop).
Despite the power of macro processing, it fell into disuse in high level languages while remaining a perennial for assemblers.
This was because many programmers were rather confused by macro parameter substitution and did not clearly distinguish macro processing from assembly and execution.
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of resulting bugs was the use of a parameter that was itself an expression and not a simple name when the macro writer expected a name. In the macro:
foo: macro a
load a*b
the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the expansion load a-c*b occurs, which (because of operator precedence) multiplies only c by b rather than the whole expression.
To avoid this, users of macro processors learned to religiously parenthesize formal parameters inside macro definitions, and callers had to do the same to their "actual" parameters.
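A minimal MASM-style sketch of the same hazard and its fix (all names are illustrative; b, x, and c are assumed to be assembly-time constants):
TIMESB  MACRO a
        mov  eax, a*b           ; assembly-time expression: the parameter times the constant b
        ENDM

        TIMESB x                ; expands to  mov eax, x*b      (as intended)
        TIMESB x-c              ; expands to  mov eax, x-c*b    (only c is multiplied by b)
Writing the macro body as mov eax, (a)*b makes the expansion evaluate (x-c)*b, which is why macro writers learned to parenthesize formal parameters.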
PL/I and C feature macros, but this facility is underused, or dangerous when used, because macros can only manipulate text. On the other hand, homoiconic languages such as Lisp, Prolog, and Forth retain the power of assembly language macros because they are able to manipulate their own code as data.
Some assemblers have incorporated structured programming elements to encode execution flow. The earliest example of this approach was the Concept-14 macro set, originally proposed by Dr. H. D. Mills (March 1970) and implemented by Marvin Kessler at IBM's Federal Systems Division, which extended the S/360 macro assembler with IF/ELSE/ENDIF and similar control-flow blocks.[3] This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use).
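As a rough sketch of the idea (these macro names are illustrative, not the actual Concept-14 macros), a structured block and the explicit branches it replaces might look like this in x86-style assembly:
        _IF   eax, EQ, 0        ; structured form written by the programmer
        mov   ebx, 1
        _ELSE
        mov   ebx, 2
        _ENDIF
; roughly equivalent hand-written form using explicit conditional branches:
        cmp   eax, 0
        jne   else1
        mov   ebx, 1
        jmp   endif1
else1:  mov   ebx, 2
endif1: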
A curious design was A-natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler, because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.
There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.
Ans. A simple instruction in assembly language such as "add 4 and 5" may look like 00111101001 in machine language. How does the computer recognize which part of such a string of binary digits is the "add" instruction and which parts are the numbers?
Actually, an instruction like "add 4 and 5" would translate into something like:
MVI R1, 4
MVI R2, 5
ADD R1, R2
in assembly. It would then be translated into a sequence of opcodes such as:
00111100
00111101
01111100
Now assume that the electronic circuitry of the processor treats high bits as a high voltage (say 5 V) and low bits as 0 V. As soon as the first instruction (00111100) is fetched, the relevant voltages are generated and 4 is stored in register R1. Similarly, 5 is stored in register R2. The last instruction selects the adder circuit and passes the contents of the two registers to it as input; the adder outputs the sum, which is then stored in a register as well.
Ans. In computer science, a low-level programming language is a language that provides little or no abstraction from a computer's instruction set architecture. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware."
A low-level language does not need a compiler or interpreter to run: machine code is executed by the processor directly, and assembly language requires only a simple translation, performed by an assembler, into machine code.
By comparison, a high-level programming language isolates the execution semantics of a computer architecture from the specification of the program, making the process of developing a program simpler and more understandable.
Low-level programming languages are sometimes divided into two categories: first generation, and second generation.
The first-generation programming language, or 1GL, is machine code. It is the only language a microprocessor can understand directly. Currently, programmers almost never write programs directly in machine code, because not only does it (like assembly language) require attention to numerous details which a high-level language would handle automatically, but it also requires memorizing or looking up numerical codes for every instruction that is used. For this reason, second generation programming languages abstract the machine code one level.
Example: A function in 32-bit x86 machine code to calculate the nth Fibonacci number:
8B542408 83FA0077 06B80000 0000C383
FA027706 B8010000 00C353BB 01000000
B9010000 008D0419 83FA0376 078BD98B
C84AEBF1 5BC3
The second-generation programming language, or 2GL, is assembly language. It is considered a second-generation language because while it is not a microprocessor's native language, an assembly language programmer must still understand the microprocessor's unique architecture (such as its registers and instructions). These simple instructions are then assembled directly into machine code. The assembly code can also be abstracted to another layer in a similar manner as machine code is abstracted into assembly code.
Example: The same Fibonacci number calculator as above, but in x86 assembly language using MASM syntax:
fib:                       ; compute the nth Fibonacci number, result in eax
mov edx, [esp+8]           ; edx = n, taken from the stack (the exact offset depends on how the function is called)
cmp edx, 0
ja @f                      ; if n > 0, skip ahead
mov eax, 0                 ; fib(0) = 0
ret
@@:
cmp edx, 2
ja @f                      ; if n > 2, fall through to the loop below
mov eax, 1                 ; fib(1) = fib(2) = 1
ret
@@:
push ebx                   ; preserve the caller's ebx
mov ebx, 1                 ; ebx and ecx hold two consecutive Fibonacci values,
mov ecx, 1                 ; initially fib(1) and fib(2)
@@:
lea eax, [ebx+ecx]         ; eax = sum of the two previous values
cmp edx, 3
jbe @f                     ; done when n has counted down to 3
mov ebx, ecx               ; slide the pair of values forward
mov ecx, eax
dec edx                    ; n = n - 1
jmp @b                     ; repeat the loop
@@:
pop ebx                    ; restore the caller's ebx
ret                        ; return with the result in eax
Assembly languages have close to a one-to-one correspondence between symbolic instructions and executable machine codes. Assembly languages also include directives to the assembler, directives to the linker, directives for organizing data space, and macros. Macros can be used to combine several assembly language instructions into a high-level-language-like construct (as well as for other purposes). There are cases where a symbolic instruction is translated into more than one machine instruction, but in general, symbolic assembly language instructions correspond to individual executable machine instructions.
High-level languages are abstract. Typically a single high-level instruction is translated into several (sometimes dozens or, in rare cases, even hundreds of) executable machine language instructions. Some early high-level languages had a close correspondence between high-level instructions and machine language instructions. For example, most of the early COBOL instructions translated into a very obvious and small set of machine instructions. The trend over time has been for high-level languages to increase in abstraction. Modern object-oriented programming languages are highly abstract (although, interestingly, some key object-oriented programming constructs do translate into a very compact set of machine instructions).
Assembly language is much harder to program in than high-level languages. The programmer must pay attention to far more detail and must have an intimate knowledge of the processor in use. But high-quality hand-crafted assembly language programs can run much faster and use much less memory and other resources than a similar program written in a high-level language. Speed increases of two to 20 times are fairly common, and increases of hundreds of times are occasionally possible. Assembly language programming also gives direct access to key machine features essential for implementing certain kinds of low-level routines, such as an operating system kernel or microkernel, device drivers, and machine control.
High-level programming languages are much easier for less skilled programmers to work in and for semi-technical managers to supervise. High-level languages also allow faster development than work in assembly language, even with highly skilled programmers; development 10 to 100 times faster is fairly common. Programs written in high-level languages (especially object-oriented programming languages) are much easier and less expensive to maintain than similar programs written in assembly language (and for a successful software project, the vast majority of the work and expense is in maintenance, not initial development).
Ans. Search and sort operations can be implemented in assembly language. Below are examples:
Sorting:
Again:  MOV FLAG, 0        ; FLAG <- 0, no swaps made yet in this pass
        LEA BX, LIST       ; BX -> first element (LIST is assumed to be a byte array)
        MOV CX, N-1        ; CX <- number of adjacent pairs (N is the assumed element count)
Next:   MOV AL, [BX]       ; AL <- current element
        CMP AL, [BX+1]     ; compare current and next values
        JLE Skip           ; already in order if current <= next
        XCHG AL, [BX+1]    ; otherwise swap: AL <-> next element
        MOV [BX], AL       ; store the old next element into the current location
        MOV FLAG, 1        ; indicate the swap
Skip:   INC BX             ; BX <- BX + 1
        LOOP Next          ; go to next pair
        CMP FLAG, 1        ; was there any swap in this pass?
        JE Again           ; if yes, repeat the process
        RET
Searching:
        MOV FLAG, 0        ; FLAG <- 0, value not found yet
        LEA BX, LIST       ; BX -> first element (LIST is assumed to be a word array)
        XOR SI, SI         ; SI <- 0, offset into the list
        MOV CX, N          ; CX <- number of elements (N is the assumed element count)
Next:   CMP AX, [BX+SI]    ; compare the value being sought (in AX) with the current element
        JE Found           ; branch if equal
        ADD SI, 2          ; SI <- SI + 2, advance to the next word
        LOOP Next          ; go to next value
        JMP Not_Found
Found:  MOV FLAG, 1        ; indicate value found
        MOV POSITION, SI   ; return the offset of the value in the list
Not_Found: RET
Ans. The following information describes some of the changes that are specific to assembler programs:
In the TPF 4.1 system, assembler programs were limited to 4 KB in size; in the z/TPF system, assembler programs can be larger than 4 KB. To exploit this capability, you can change your assembler programs to use:
o The CLINKC, RLINKC, and SLINKC assembler linkage macros
o Multiple base registers
o Baseless instructions.
* You can use the CALLC general macro in assembler programs to call C language functions.
* In the TPF 4.1 system, the TMSPC and TMSEC macros were provided to set up the interface between C language programs and macro service routines written in assembler language. In the z/TPF system, the PRLGC and EPLGC macros set up this interface by simulating the prolog and epilog code generated by GCC.
The PRLGC and EPLGC macros were provided on the TPF 4.1 system through APAR PJ29640 so that new C library functions written on the TPF 4.1 system can be migrated with little or no change. The TMSPC and TMSEC macros are still supported on the z/TPF system so that library functions already coded with those macros can also be migrated with little or no change. New library functions developed for the z/TPF system must be coded with the PRLGC and EPLGC macros.
Ans. Assembly language is a computer language in which each statement corresponds to one of the binary instructions recognized by the CPU. Assembly-language programs are translated into machine code by an assembler.
Assembly languages are more cumbersome to use than regular (or high-level) programming languages, but they are much easier to use than pure machine languages, which require that all instructions be written in binary code.
Complete computer programs are seldom written in assembly language. Instead, assembly language is used for short procedures that must run as fast as possible or must do special things to the computer hardware. For example, Figure 17 shows a short routine that takes a number, checks whether it is in the range 97 to 122 inclusive, and subtracts 32 if so, otherwise leaving the number unchanged. (That particular subtraction happens to convert all lowercase ASCII codes to their uppercase equivalents.)
This particular assembly language is for the Intel 8086 family of processors (which includes all PC-compatible computers); assembly languages for other processors are different. Everything after the semicolon in each line is a comment, ignored by the computer. Two lines (PROC and ENDP) are pseudo instructions; they tell the assembler how the program is organized. All the other lines translate directly into binary codes for the CPU.
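That figure is not reproduced here, but a routine of the kind described might look like the following 8086-style sketch (the procedure name TOUPPER is illustrative):
TOUPPER PROC               ; pseudo instruction: beginning of the procedure
        CMP  AL, 97        ; is the value below 97 ('a')?
        JL   DONE          ; if so, leave it unchanged
        CMP  AL, 122       ; is the value above 122 ('z')?
        JG   DONE          ; if so, leave it unchanged
        SUB  AL, 32        ; subtract 32: a lowercase letter becomes uppercase
DONE:   RET                ; return to the caller
TOUPPER ENDP               ; pseudo instruction: end of the procedure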
Many of the most common operations in computer programming are hard to implement in assembly language. For example, there are no assembly language statements to open a file, print a number, or compute a square root. For these functions the programmer must write complicated routines from scratch, use services provided by the operating system, or call routines in a previously written library.
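For instance, even printing a single character usually means asking the operating system for help; a minimal MS-DOS sketch using the INT 21h service (the character printed is arbitrary):
        MOV  AH, 2         ; DOS function 02h: write a character to standard output
        MOV  DL, 'A'       ; the character to print
        INT  21h           ; call the operating system service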
The translation performed by an assembler is essentially a collection of substitutions: mnemonic operation codes are replaced by their machine language equivalents, and symbolic operands (labels) are replaced by the machine addresses or values they represent.
Except for one factor, these substitutions could all be performed in one sequential pass over the source text. That factor is the forward reference: a reference to a symbol that has not yet been scanned by the assembler.
The separate passes of a two-pass assembler are what allow forward references to be handled without restriction.
If certain restrictions are imposed, forward references can be handled without making two passes; these different sets of restrictions lead to one-pass assemblers.
Such one-pass assemblers are particularly attractive when secondary storage is either slow or missing entirely, as on many small machines.