
Computer Architecture lec-3 Computer System Organization

Hi, today we're going to learn about computer system organization and processors. So, are you ready for it?... You will be.

Computer System Organization
A digital computer consists of an interconnected system of processors, memories, and input/output (I/O) devices. This lecture is an introduction to these three components and to their interconnection, as background for the detailed examination of specific levels in the five succeeding lectures. Processors, memories, and I/O devices are key concepts that will be present at every level, so we'll start our study of computer architecture by looking at all three in turn.

Processors
The organization of a simple bus-oriented computer is shown in the picture below. The CPU (Central Processing Unit) is the "brain" of the computer. Its function is to execute programs stored in the main memory by fetching their instructions, examining them, and then executing them one after another. The components are connected by a bus, which is a collection of parallel wires for transmitting address, data, and control signals. Buses can be external to the CPU, connecting it to memory and I/O devices, but also internal to the CPU, as we will see shortly.

The CPU is composed of several distinct parts. The control unit is responsible for fetching instructions from main memory and determining their type. The arithmetic logic unit performs operations such as addition and Boolean AND needed to carry out the instructions.

The CPU also contains a small, high-speed memory used to store temporary results and certain control information. This memory is made up of a number of registers, each of which has a certain size and function. Usually, all the registers have the same size. Each register can hold one number, up to some maximum determined by the size of the register. Registers can be read and written at high speed since they are internal to the CPU.
Registers
The most important register is the Program Counter (PC), which points to the next instruction to be fetched for execution. (The name "program counter" is somewhat misleading because it has nothing to do with counting anything, but the term is universally used.) Also important is the Instruction Register (IR), which holds the instruction currently being executed. Most computers have numerous other registers as well, some general purpose and some for specific purposes.
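To make this concrete, here is a minimal Python sketch of a register set. The register names, the 16-bit width, and the helper function are illustrative assumptions, not a description of any particular CPU; the point is simply that each register holds one number up to a maximum set by its size.

```python
# Minimal sketch of a CPU register set (names and 16-bit width are
# illustrative assumptions, not a real architecture).

REGISTER_WIDTH = 16                     # each register holds one number
MAX_VALUE = (1 << REGISTER_WIDTH) - 1   # up to a maximum set by its size

registers = {
    "PC": 0,   # Program Counter: address of the next instruction to fetch
    "IR": 0,   # Instruction Register: the instruction currently executing
    "R0": 0,   # general-purpose registers
    "R1": 0,
}

def write_register(name, value):
    # A real register simply drops bits beyond its width; we model that
    # by masking the value to REGISTER_WIDTH bits.
    registers[name] = value & MAX_VALUE

write_register("R0", 70000)   # too big for 16 bits
print(registers["R0"])        # 4464 -- the value wrapped to the register size
```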
CPU Organization 
The data path of a typical von Neumann machine

The internal organization of part of a typical von Neumann CPU is shown in more detail in the picture on the right. This part is called the data path and consists of the registers (typically 1 to 32), the ALU (Arithmetic Logic Unit), and several buses connecting the pieces. The registers feed into two ALU input registers, labeled A and B in the picture. These registers hold the ALU input while the ALU is performing some computation.

ALU (Arithmetic Logic Unit)
The ALU itself performs addition, subtraction, and other simple operations on its inputs, yielding a result in the output register. This result can be stored back into a register and, later on, written (i.e., stored) into memory, if desired. Not all designs have the A, B, and output registers.

Most instructions can be divided into one of two categories: register-memory or register-register. Register-memory instructions allow memory words to be fetched into registers, where they can be used as ALU inputs in subsequent instructions, for example. ("Words" are the units of data moved between memory and registers. A word might be an integer. We will discuss memory organization in the next lecture.) Other register-memory instructions allow registers to be stored back into memory.

The other kind of instruction is register-register. A typical register-register instruction fetches two operands from the registers, brings them to the ALU input registers, performs some operation on them (for example, addition or Boolean AND), and stores the result back in one of the registers. The process of running two operands through the ALU and storing the result is called the data path cycle and is the heart of most CPUs. To a considerable extent, it defines what the machine can do.
The faster the data path cycle is, the faster the machine runs.
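As a rough sketch of the data path cycle just described, the following Python snippet models a register-register instruction: two operands are copied into the A and B input registers, the ALU combines them, and the output register is written back. The register names, the alu helper, and the operation encoding are assumptions made for illustration.

```python
# Sketch of one data path cycle: two operands flow through the A and B
# input registers, the ALU combines them, and the result lands in the
# output register before being written back. Names are illustrative.

registers = {"R1": 7, "R2": 5, "R3": 0}

def alu(op, a, b):
    # The ALU performs simple operations on its two inputs.
    if op == "ADD":
        return a + b
    if op == "AND":
        return a & b
    raise ValueError("unsupported ALU operation: " + op)

def data_path_cycle(op, src1, src2, dst):
    # 1. Fetch the two operands from the registers into the ALU
    #    input registers A and B.
    a_latch = registers[src1]
    b_latch = registers[src2]
    # 2. The ALU combines them, yielding a result in the output register.
    output_register = alu(op, a_latch, b_latch)
    # 3. Store the output register back into one of the registers.
    registers[dst] = output_register

data_path_cycle("ADD", "R1", "R2", "R3")   # a register-register instruction
print(registers["R3"])                     # 12
```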
Instruction Execution
The CPU executes each instruction in a series of small steps. Roughly speaking, the steps are as follows:
1- Fetch the next instruction from memory into the instruction register.
2- Change the program counter to point to the following instruction.
3- Determine the type of instruction just fetched.
4- If the instruction uses a word in memory, determine where it is.
5- Fetch the word, if needed, into a CPU register.
6- Execute the instruction.
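These steps are easy to see in code. The sketch below is a toy fetch-decode-execute loop for a made-up two-instruction machine; the instruction format, the opcodes (LOAD, ADD, HALT), and the memory layout are all assumptions for illustration, with each comment keyed to the step it implements.

```python
# Toy fetch-decode-execute loop keyed to the six steps above.
# The instruction set (LOAD addr -> ACC, ADD addr -> ACC, HALT) is made up.

memory = [
    ("LOAD", 4),   # address 0: fetch the word at address 4 into ACC
    ("ADD",  5),   # address 1: add the word at address 5 to ACC
    ("HALT", 0),   # address 2: stop
    None,          # address 3: unused
    10,            # address 4: data word
    32,            # address 5: data word
]

pc, ir, acc = 0, None, 0

while True:
    ir = memory[pc]                 # 1. fetch the next instruction into the IR
    pc += 1                         # 2. advance the PC to the following instruction
    opcode, address = ir            # 3. determine the type of instruction
    if opcode == "HALT":
        break
    word = memory[address]          # 4-5. the instruction uses a memory word,
                                    #      so locate it and fetch it into the CPU
    if opcode == "LOAD":            # 6. execute the instruction
        acc = word
    elif opcode == "ADD":
        acc += word

print(acc)   # 42
```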

RISC vs CISC
In 1980, a group at Berkeley led by David Patterson and Carlo Séquin began designing VLSI CPU chips that did not use interpretation (Patterson, 1985; Patterson and Séquin, 1982). They coined the term RISC for this concept and named their CPU chip the RISC I, followed shortly by the RISC II.
At the time these simple processors were first being designed, the characteristic that caught everyone's attention was the relatively small number of instructions available, typically around 50. This number was far smaller than the 200 to 300 on established computers such as the DEC VAX and the large IBM mainframes. In fact, the acronym RISC stands for Reduced Instruction Set Computer, which was contrasted with CISC, which stands for Complex Instruction Set Computer (a thinly veiled reference to the VAX, which dominated university computer science departments at the time). Nowadays few people think that the size of the instruction set is a major issue, but the name stuck.

To make a long story short, a great religious war ensued, with the RISC supporters attacking the established order (VAX, Intel, large IBM mainframes). They claimed that the best way to design a computer was to have a small number of simple instructions that execute in one cycle of the data path, namely, fetching two registers, combining them somehow (e.g., adding or ANDing them), and storing the result back in a register. Their argument was that even if a RISC machine takes four or five instructions to do what a CISC machine does in one instruction, if the RISC instructions are 10 times as fast (because they are not interpreted), RISC wins. It is also worth pointing out that by this time the speed of main memories had caught up to the speed of read-only control stores, so the interpretation penalty had greatly increased, strongly favoring RISC machines.
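A back-of-the-envelope calculation shows why this argument is plausible; the cycle counts below are made-up numbers chosen only to illustrate the claim, not measurements of any real machine.

```python
# Made-up numbers illustrating the RISC argument above.
cisc_instructions = 1          # one complex instruction...
cisc_cycles_each  = 10         # ...interpreted, taking ~10 cycles

risc_instructions = 5          # five simple instructions...
risc_cycles_each  = 1          # ...each executing in one data path cycle

print(cisc_instructions * cisc_cycles_each)   # 10 cycles for the CISC version
print(risc_instructions * risc_cycles_each)   # 5 cycles for the RISC version
```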

One might think that, given the performance advantages of RISC technology, RISC machines (such as the Sun UltraSPARC) would have mowed down CISC machines (such as the Intel Pentium) in the marketplace. Nothing like this has happened. Why not?

First of all, there is the issue of backward compatibility and the billions of dollars companies have invested in software for the Intel line. Second, surprisingly, Intel has been able to employ the same ideas even in a CISC architecture: its CPUs contain a RISC core that executes the simplest (and typically most common) instructions in a single data path cycle, while interpreting the more complicated instructions in the usual CISC way. The net result is that common instructions are fast and less common instructions are slow. While this hybrid approach is not as fast as a pure RISC design, it gives competitive overall performance while still allowing old software to run unmodified.

Note: (for more details visit this site).  
