HISTORY OF CPU

EARLY COMPUTERS

In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared to today’s microprocessor-driven computers. The first general purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC’s CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.


TRANSISTOR

A solution to the problems posed by vacuum tubes came in 1948, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took nine years before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.


THE INTEGRATED CIRCUIT (IC)

Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip compared to the millions or even billions of transistors per chip available on today’s CPUs.

In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest-size piece of data that a CPU can manipulate at one time.) However, a fully working integrated circuit computer required additional circuits to provide register storage, data flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators at the time. In 1975 Micro Instrumentation Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 32-bit microprocessors are most common today, microprocessors are becoming increasingly sophisticated, with many 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 3 GHz, or 3 billion clock pulses per second.


CURRENT DEVELOPMENTS

The competitive nature of the computer industry and the use of faster, more cost-effective computing continue the drive toward faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate to improvements in CPU speed.

Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.


HOW A CPU WORKS

CPU FUNCTION

A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.

As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder, which interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers, where they may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing them in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.
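The fetch-decode-execute cycle described above can be sketched in a few lines of Python. This is a toy model, not a real instruction set: the tuple-based instructions, register names, and sample program are invented for illustration, and real CPUs decode binary opcodes rather than Python tuples.

```python
# Toy fetch-decode-execute loop. The tuple instruction format, the
# register names, and the program are invented for illustration.
RAM = [
    ("LOAD", "A", 7),    # put the value 7 into register A
    ("LOAD", "B", 5),    # put the value 5 into register B
    ("ADD",  "A", "B"),  # A = A + B, result kept in register A
    ("HALT",),
]

registers = {"A": 0, "B": 0}   # fast temporary storage inside the CPU
program_counter = 0            # address of the next instruction in RAM

while True:
    instruction = RAM[program_counter]   # fetch over the "bus"
    program_counter += 1
    opcode = instruction[0]              # decode
    if opcode == "LOAD":                 # execute
        registers[instruction[1]] = instruction[2]
    elif opcode == "ADD":
        registers[instruction[1]] += registers[instruction[2]]
    elif opcode == "HALT":
        break

print(registers["A"])  # 12
```

Note how the program counter advances once per fetch, which is exactly the sequential behavior that the branching instructions described later are allowed to override.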


BRANCHING INSTRUCTIONS

The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to abruptly shift to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out of order instruction stream. A conditional branch tests the result of a previous operation to see if the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
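A conditional branch and its flag can be modeled in the same toy style. In this hypothetical sketch, a SUB instruction records the sign of its result in a "negative" flag, and a later branch instruction consults that flag to decide whether to jump; the instruction names and addresses are invented for illustration.

```python
# Hypothetical program: subtract, then branch only if the result
# was negative. Addresses count instructions from 0.
program = [
    ("SUB", 3, 5),       # 0: compute 3 - 5 and set the flags
    ("BRANCH_NEG", 4),   # 1: jump to address 4 if the result was negative
    ("SET", "skipped"),  # 2: reached only if the branch is NOT taken
    ("HALT",),           # 3
    ("SET", "taken"),    # 4: branch target
    ("HALT",),           # 5
]

flags = {"negative": False}  # special CPU locations holding test results
outcome = None
pc = 0                       # program counter

while True:
    op, *args = program[pc]
    pc += 1                  # normally advance sequentially
    if op == "SUB":
        result = args[0] - args[1]
        flags["negative"] = result < 0    # record the sign in a flag
    elif op == "BRANCH_NEG":
        if flags["negative"]:             # conditional branch: test the flag
            pc = args[0]                  # jump out of sequence
    elif op == "SET":
        outcome = args[0]
    elif op == "HALT":
        break

print(outcome)  # taken, because 3 - 5 is negative
```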

CLOCK PULSES


The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU's circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses. More complex tasks take several clock periods to complete. Clock rates are measured in hertz, or number of pulses per second. For instance, a 2-gigahertz (2-GHz) processor has 2 billion clock pulses passing through it per second. The clock rate is a measure of the speed of a processor.
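The arithmetic above is easy to check: the duration of one clock period is simply the reciprocal of the clock rate.

```python
clock_rate_hz = 2_000_000_000    # a 2-GHz clock: 2 billion pulses per second
period_ns = 1e9 / clock_rate_hz  # nanoseconds per clock period
print(period_ns)                 # 0.5

# A task that needs 4 clock periods therefore takes 2 nanoseconds.
print(4 * period_ns)             # 2.0
```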



FIXED-POINT AND FLOATING-POINT NUMBERS


Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU’s floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel's Pentium chip.
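The trade-off described above can be illustrated with a small sketch. The scaled-integer technique below is one common way software mimics fixed-point arithmetic; the two-digit format and the sample values are arbitrary choices for illustration.

```python
# Fixed-point: exactly 2 digits after the decimal point, stored as an
# integer count of hundredths. The format and values are arbitrary.
def to_fixed(value, fractional_digits=2):
    scale = 10 ** fractional_digits
    return round(value * scale)      # 3.25 becomes the integer 325

a = to_fixed(3.25)     # stored as 325
b = to_fixed(1.50)     # stored as 150
total = a + b          # plain integer addition: fast and exact in-format
print(total / 100)     # 4.75

# Floating-point: a mantissa times a power of ten (hardware uses powers
# of two), covering magnitudes no fixed-point format of this size could.
avogadro = 6.02e23
electron_charge = 1.6e-19
print(avogadro * electron_charge)   # roughly 9.6e4
```

The fixed-point sum needs only one integer addition, while the floating-point product must align exponents and round the mantissa, which is why floating-point operations can take many more clock periods.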


CENTRAL PROCESSING UNIT (CPU)

INTRODUCTION

Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.


BUS NETWORK

Bus Network, in computer science, a topology (configuration) for a local area network in which all nodes are connected to a main communications line (bus). On a bus network, each node monitors activity on the line. Messages are detected by all nodes but are accepted only by the node(s) to which they are addressed. Because a bus network relies on a common data “highway,” a malfunctioning node simply ceases to communicate; it doesn't disrupt operation as it might on a ring network, in which messages are passed from one node to the next. To avoid collisions that occur when two or more nodes try to use the line at the same time, bus networks commonly rely on collision detection or Token Passing to regulate traffic.
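The addressing rule described above can be sketched as a toy model: every node "sees" each message placed on the shared line, but only the addressee accepts it, and a node that drops off the bus simply stops accepting. The node names and the broadcast function are invented for illustration.

```python
# Toy bus network: all nodes monitor the shared line; only the node the
# message is addressed to accepts it. Node names are invented.
nodes = ["printer", "desk-a", "desk-b", "server"]

def broadcast(bus_nodes, destination, payload):
    """Put one message on the shared line; return the deliveries made."""
    deliveries = []
    for node in bus_nodes:          # every node detects the message
        if node == destination:     # but only the addressee accepts it
            deliveries.append((node, payload))
    return deliveries

print(broadcast(nodes, "server", "print job done"))
# [('server', 'print job done')]

# A malfunctioning node simply stops listening; the rest of the bus
# keeps working, unlike a ring where each node must pass messages on.
nodes.remove("desk-a")
print(broadcast(nodes, "server", "still delivered"))
# [('server', 'still delivered')]
```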


COMPUTER


A computer is an electronic device that receives input, stores and manipulates data, and provides output in a useful format at high speed. A computer performs tasks, such as calculations or electronic communication, under the control of a set of instructions called a program. Computers perform a wide variety of activities reliably, accurately, and quickly. The computer is one of the most versatile machines ever created: it plays a vital role in education, industry, government, medicine, scientific research, law, and even music and art. Without computers, life would certainly be difficult and different.

CHARACTERISTICS OF COMPUTER
Today, computers are found everywhere: in offices, homes, schools, and many other places. Much of the world runs on computers, and computers have changed our lives. Some of the characteristics of computers, which make them an essential part of every emerging technology, are listed below:

Speed
Computers work at tremendous speed, processing data at an extremely fast rate. At present, a powerful computer can perform billions of operations in just a second.
Millisecond = a thousandth of a second (1/1,000)
Microsecond = a millionth of a second (1/1,000,000)
Nanosecond = a billionth of a second (1/1,000,000,000)
Picosecond = a trillionth of a second (1/1,000,000,000,000)

Accuracy
Computers are very accurate. The level of accuracy depends upon the instructions and the type of machine being used. A computer is capable of doing only what it is instructed to do. Inaccurate instructions for processing lead to inaccurate results. This is known as GIGO (Garbage In, Garbage Out). Errors in the results are usually due to human factors rather than technological weaknesses.

Automatic
Computers are automatic machines. Once a program is in the memory of a computer, no human intervention is needed; the computer follows the instructions step by step, executes them, and terminates the execution when it receives the command to do so.

Storage capacity
Computers have a main memory and secondary storage systems. The main memory of the computer is relatively small and can hold only a certain amount of information. Therefore, larger amounts of data and information are stored in secondary storage media such as magnetic disks and optical discs. Computers can also retrieve the stored information instantly when desired.

Diligence
A computer, being a machine, does not suffer from the human problems of tiredness and lack of concentration. It can work continuously for hours without making mistakes. Even if millions of calculations are to be performed, it will perform the last calculation with the same accuracy and speed as the first one.

LIMITATIONS OF COMPUTER
Computers have certain limitations too. As a machine, a computer can only perform what it is programmed to do. Computers lack decision-making power; they cannot decide on their own. If an unanticipated situation arises, a computer will either produce erroneous results or abandon the task altogether. It does not have the ability to work out an alternative solution.




BEST INPUT DEVICES AVAILABLE IN THE MARKET

INPUT DEVICE
An input device can be defined as an electromechanical device that allows the user to feed data and instructions into the computer for analysis or storage and to give commands to the computer. Data and instructions are entered into the computer’s main memory through an input device. An input device captures data and translates it into a form that the computer can process. Input devices can be broadly classified into the following categories:

Keyboard
A keyboard is the most common input device. Using a keyboard, the user can type text and execute commands. The keyboard is designed to resemble a regular typewriter, with a few additional keys. Data is entered into the computer by simply pressing various keys.
The layout of a keyboard comes in various styles but QWERTY is the most common layout. The layout of the keyboard has changed very little ever since it was introduced. In fact, the most common change in its technology has simply been the natural evolution of adding more keys that provide additional functionality. The number of keys on a keyboard varies from 82 keys to 108 keys. Portable computers such as laptops quite often have custom keyboards that have slightly different key arrangements from a standard keyboard.

Pointing Devices
Pointing devices are the input devices by which we can point out and select the items rapidly from the multiple options displayed on the screen. These devices can also be used to create graphic elements on the screen such as lines, curves and freehand shapes. The most common types of pointing devices available are;
a) Mouse b) Trackball c) Joystick
d) Light Pen e) Touch Screen f) Touch Pad

a) Mouse
A mouse is a small hand-held pointing device, which is used to create graphic elements on the screen such as lines, curves, and freehand shapes. It is also used to run a program and pull down a menu in a GUI (Graphical User Interface) based computer system. It is rectangular in shape, with a rubber ball embedded at its lower side and buttons on the top. Usually a mouse contains two or three buttons, which can be used to input commands or instructions. The mouse may be classified into two categories:
I) Mechanical mouse II) Optical mouse

I) Mechanical mouse
A mechanical mouse uses a rubber ball at the bottom surface, which rotates as the mouse moves along a flat surface, to move the cursor. It is the most common and least expensive pointing device. Microsoft, IBM, and Logitech are some well-known makers of the mechanical mouse.

II) Optical mouse
An optical mouse uses a light beam instead of a rotating ball to detect movement across a specially patterned mouse pad. As the user moves the mouse on a flat surface, the cursor on the screen moves in the direction of the mouse’s movement. It is more expensive than the mechanical mouse. Modern optical mice are accurate and often do not need a mouse pad.

b) Track Ball
A trackball is another pointing device that uses a ball set in a square cradle. In general, a trackball is just like a mouse turned upside down. The ball is rolled with the fingers to move the cursor around the screen. A trackball requires less space than a mouse for operation because the whole device is not moved to move the cursor. It is often attached to or built into the keyboard. Trackballs built into the keyboard are commonly used in laptop computers, because a mouse is not practical for laptop users in a small space. This pointing device comes in various shapes but with the same functionality, and it works like a mouse.

c) Joystick
Joystick is a device that moves in all directions and controls the movement of the cursor on the screen. The joystick offers three types of controls.
• Digital control
• Glide control
• Direct control
Digital control allows limited movement in a fixed number of directions, such as up, down, left, and right. Glide and direct controls allow movement in all directions (360 degrees). The basic design of a joystick consists of a stick attached to a plastic base with a flexible rubber sheath. It has some push buttons and a circuit board, which is placed under the stick. Joysticks are mainly used for computer games, controlling industrial robots, and other applications such as flight simulators, training simulators, etc.

d) Light Pen
A light pen is a hand-held electro-optical pointing device which is connected to the computer by a cable. When it touches a connected computer monitor, it allows the computer to determine where on that screen the pen is pointed. It facilitates drawing images and selecting objects on the display screen by directly pointing to the objects with the pen. Light pens give the user the full range of mouse capabilities without using a pad or any horizontal surface. Using light pens, the user can interact more easily with applications in such modes as dragging and dropping or highlighting. The light pen is very popular for graphic work in engineering, such as CAD (Computer-Aided Design).

e) Touch Screen
A touch screen is a special kind of screen device, which is placed on the computer monitor in order to allow the direct selection or activation of the computer’s information when somebody touches the screen. Essentially, it registers an input when a finger or other object touches the screen. Touch screens are normally used to access information with minimum effort. However, they are not suitable for the input of large amounts of data. Typically, they are used in information-providing systems in places like hospitals, airlines, railway reservation counters, amusement parks, etc.

f) Touch Pad
A touch pad is one of the latest pointing devices. It looks like a small gray window, about two inches wide. It is used in portable computers such as laptops and notebooks as a substitute for the mouse. It has two buttons below or above the pad which work like mouse buttons. You can move the cursor on the screen by sliding a finger or other object along the pad. One can also click by tapping a finger on the touch pad, and drag with a tap followed by a continuous sliding motion.

Digital Camera
A digital camera is also an input device; it stores pictures digitally rather than recording them on film. Once a picture has been taken, it is stored in the camera’s memory chip. The picture can be downloaded to a computer system, manipulated with image-editing software, and then printed. The major advantage of digital cameras is that making photos is both inexpensive and fast because there is no film processing.

Scanner
A scanner scans an image and transforms it into digital codes and graphics, which can be edited, manipulated, combined, and then printed. Scanners use a light beam to scan the input data. If the data to be scanned is an image, it can be changed using special image-editing software. If the image is a page of text, then special optical character recognition software must be used to convert the image into letters, which can then be edited using a word processor. Most scanners come with a utility program that allows them to communicate with the computer and save the scanned image as a graphic file on the computer. Commonly, scanners are classified into two types:
• Hand-Held scanner
• Flat-Bed scanner

Microphone
A microphone is a speech recognition device. Speech recognition is one of the most interactive ways to communicate with the computer. The user can simply instruct the computer about the task to be performed with the help of a microphone. It is the technology by which sounds, words, or phrases spoken by humans are converted into digital signals, and these signals are transformed into computer-generated text or commands. Most speech recognition systems are speaker-dependent, so they must be trained separately for each individual user. The system learns the voice of the user, who speaks isolated words repeatedly; these voiced words are then recognizable in the future. Speech recognition is most popular in the corporate world among non-typists, people with disabilities, and business travelers who tape-record information for later transcription.

Graphic Digitizer
Graphic digitizer is an input device, which is used for converting pictures, maps and drawings into digital form for storage in computers. This enables re-creation of the drawing whenever required. It also facilitates any changes in the drawing whenever required.
A digitizer consists of a digital tablet (also known as a graphics tablet) associated with a stylus. The digitizing tablet is a flat surface, which contains hundreds of fine copper wires forming a grid. Each copper wire receives electric pulses. The digitizing tablet can be spread over a working table and is connected to the computer.

Optical Scanners
New technologies have developed alternative methods of inputting data that avoid entering data through keystrokes. Devices such as bar code readers can interpret machine-printed marks or codes. Accordingly, there are four types of optical recognition:
a) OCR (Optical Character Recognition)
b) OMR (Optical Mark Recognition)
c) MICR ( Magnetic-Ink Character Recognition)
d) BCR(Bar Code Reader)

Optical Character Recognition (OCR)
Optical Character Recognition is a process of scanning printed pages as images on a flatbed scanner and using OCR software to recognize the letters as ASCII text. The device used for this technology is called an optical reader. In an OCR system, a book or a magazine article is fed directly into an electronic computer file, and then this file is edited using a word processor. Advanced OCR systems can read text in a large variety of fonts.

Optical Mark Recognition (OMR)
Optical Mark Recognition is the process of recognizing a pre-specified type of mark made by pencil or pen on paper. This type of technology is used to evaluate the answer sheets of competitive examinations. Optical mark reading is done by a special device called an optical mark reader.

Magnetic-Ink Character Recognition (MICR)
Magnetic-Ink Character Recognition technology is used by the banking industry for faster processing of large volumes of cheques. This technology also ensures accuracy of data entry, because most of the information is pre-printed on the cheque and is directly fed to the computer. A magnetic ink character reader is the device used in this technology.

Bar Code Reader (BCR)
A bar code is a machine-readable code in the form of a pattern of parallel vertical lines. Bar codes are commonly used for labeling goods in supermarkets, numbering books in libraries, etc. These codes/strips are sensed and read by a photoelectric device called a bar code reader, which reads the code by means of reflected light. The information read by a bar code reader is fed into the computer, which recognizes the information from the thickness and spacing of the bars.

