
Definition of Computer Virus, Protection


Computer viruses are software programs that can clone themselves and operate without the knowledge or consent of the computer user. In other words, a computer virus is a program designed to spread itself by first infecting executable files or the system areas of hard and floppy disks and then making copies of itself. A virus can transfer to a computer by various means without the knowledge and permission of the user, and it can hide itself inside other files. Whenever an infected host file or program is used, the virus becomes active and performs destructive tasks such as dislocating, deleting and changing the contents of files. It infects data or programs every time the user runs an infected program, taking the opportunity to replicate itself. It is the destructive intellectual creation of a computer programmer.

In 1949, Dr. John von Neumann introduced the concept of a self-replicating computer program. The first replicating program, named “Creeper”, was reported during 1970 in the network system of the American Department of Defense. In 1983, the American researcher Fred Cohen used the term “computer virus” in his research paper for a program that replicates itself and prevents other programs from being executed. In 1987, two Pakistani brothers, Amjad and Basit Alvi, released the first IBM PC virus, “C-Brain”, to stop illegal reproduction of software developed at their Brain Computer Shop. An Indonesian programmer released the first antivirus software in 1988 to detect the C-Brain virus. This antivirus software could remove C-Brain from a computer and immunize the system against further Brain attacks. After this event, people started to take much more interest in viruses, and many different viruses began to be produced.

The number of computer viruses is increasing day by day, and their nature varies from one to another. Viruses spread from computer to computer through electronic bulletin boards, telecommunication systems, shared floppy disks, pen drives, compact disks and the Internet. Viruses are often created by computer programmers for fun, but once they begin to spread they take on a life of their own. Antivirus software is developed to protect computers from viruses.

PURPOSE OF CREATING COMPUTER VIRUS

1. To stop software piracy. Software can be easily copied from one computer to another. In order to stop software piracy, the programmers of the software themselves sometimes create computer viruses.

2. To entertain the users by displaying interesting messages or pictures.

3. To steal data and information.

4. To serve as reminders of incidents that happened at different times.

5. To destroy data, information and files.

6. To expose their programming ability.

7. To earn money.

Computer viruses activate when infected files or programs are used. Once a virus is active, it may replicate by various means and tries to infect other files or the operating system. When you copy files or programs from an infected computer, the viruses transfer along with them to the portable disk, which in turn transfers the viruses to another computer whenever it is used. So computers mostly get infected through external sources. The most common ways through which viruses spread are:

· Sharing of infected external portable disks such as floppy disks, pen drives or compact disks.

· Using pirated software.

· Opening of virus infected e-mail messages and attached files.

· Downloading files or programs from websites that are not secure.

· Exchanging of data, information or files over a network.

The number of viruses is increasing daily, and each virus possesses different characteristics. It is very difficult to know whether a computer is infected with viruses or not. If a computer is infected, you may see the following symptoms:

· Programs take more time to load, fail to load or hang frequently.

· Unexpected messages or images appear suddenly on the screen.

· The system displays unusual error messages or encounters errors frequently.

· Files go missing or unexpected files appear.

· Low-memory messages are displayed frequently.

· Programs open automatically without being instructed to.

PROTECTION FROM VIRUS

We already know that viruses are harmful to our computers. They affect our computer systems, can damage important files and programs, and make the computer slow. Viruses cause several such effects and frequently irritate users. So protecting our computers from viruses is necessary. If we follow some simple tips, we can prevent virus infections.

Some general tips on prevention and protection from virus infections are as follows:

1. Install anti-virus software from a well known, reputable company and use it regularly.

2. Update the antivirus software frequently in order to get the latest virus definitions, and scan the hard disk using the latest definitions, because new viruses come out every single day.

3. Install an ‘on access’ scanner and configure it to start automatically each time you boot your computer system. This will protect your system by checking for viruses each time your computer accesses an executable file.

4. Scan any programs or other files that may contain executable code before you run or open them, no matter where they come from. There have been cases of commercially distributed floppy disks, pen drives and CD-ROMs spreading virus infections.

5. If your e-mail or news software has the ability to automatically execute JavaScript, Word macros or other executable code contained in or attached to a message, it is strongly recommended that you disable this feature.

6. Be extremely careful about accepting programs or other files during on-line chat sessions. This seems to be one of the more common ways that people end up with virus or Trojan horse problems.

7. Back up your entire system on a regular basis, because some viruses may erase or corrupt files on your hard disk, and a recent backup allows the data to be recovered.

8. Before using other people's pen drives, check whether they are virus infected. Scan them first and only then open them.

9. Do not use pirated software.

10. Lock the computer system with a password to prevent your computer from being used by others.

11. Do not download any programs from the Internet unless you have confirmed they are virus free.

12. Be careful while checking mail that has attached documents.
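The scanning that antivirus tools perform boils down, in its simplest form, to searching files for known byte patterns ("signatures"). The sketch below is a minimal illustration of that idea only; the signature names and byte patterns are invented for the example and are not real virus code.

```python
# Minimal sketch of signature-based virus scanning: search data for
# byte patterns of known viruses. The signatures below are made up
# for illustration; real scanners use huge databases plus heuristics.

SIGNATURES = {
    "Demo-Virus-A": b"\xde\xad\xbe\xef",
    "Demo-Virus-B": b"EVIL_PAYLOAD",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# Usage: a "clean" file and an "infected" one.
clean = b"just an ordinary document"
infected = b"some program code EVIL_PAYLOAD more code"

print(scan_bytes(clean))     # []
print(scan_bytes(infected))  # ['Demo-Virus-B']
```

Real products add heuristics and behaviour monitoring on top, but the core pattern match is the same idea, which is also why signature databases must be updated constantly.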


HISTORY OF CPU

EARLY COMPUTERS

In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared to today’s microprocessor-driven computers. The first general purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC’s CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.


TRANSISTOR

A solution to the problems posed by vacuum tubes came in 1948, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took nine years before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.


THE INTEGRATED CIRCUIT (IC)

Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip compared to the millions or even billions of transistors per chip available on today’s CPUs.

In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest-size piece of data that a CPU can manipulate at one time.) However, a fully working integrated circuit computer required additional circuits to provide register storage, data flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators at the time. In 1975 Micro Instrumentation Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 32-bit microprocessors are most common today, microprocessors are becoming increasingly sophisticated, with many 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 3 GHz, or 3 billion clock pulses per second.


CURRENT DEVELOPMENTS

The competitive nature of the computer industry and the use of faster, more cost-effective computing continue the drive toward faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate to improvements in CPU speed.

Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.


HOW A CPU WORKS

CPU FUNCTION

A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.

As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers where it may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing it in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.


BRANCHING INSTRUCTIONS

The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to abruptly shift to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out of order instruction stream. A conditional branch tests the result of a previous operation to see if the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
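The program counter, flags, and conditional branches described above can be sketched with a toy instruction set. The instruction names (LOAD, SUB, JMP, JNEG, HALT) are invented for this illustration and do not correspond to any real CPU.

```python
# Toy CPU sketch: the program counter advances sequentially unless a
# branch instruction redirects it. JNEG is a conditional branch taken
# only if the flag set by the last subtraction indicates a negative
# result; JMP is an unconditional branch.

def run(program):
    acc = 0           # accumulator register
    neg_flag = False  # flag: was the last arithmetic result negative?
    pc = 0            # program counter
    while True:
        op, arg = program[pc]
        pc += 1       # default: advance to the next instruction
        if op == "LOAD":
            acc = arg
        elif op == "SUB":
            acc -= arg
            neg_flag = acc < 0
        elif op == "JMP":        # unconditional branch
            pc = arg
        elif op == "JNEG":       # conditional branch on the flag
            if neg_flag:
                pc = arg
        elif op == "HALT":
            return acc

# Subtract 3 from 10 repeatedly until the result goes negative.
program = [
    ("LOAD", 10),  # 0: acc = 10
    ("SUB", 3),    # 1: acc -= 3, set flag if negative
    ("JNEG", 4),   # 2: leave the loop once acc goes negative
    ("JMP", 1),    # 3: otherwise repeat
    ("HALT", 0),   # 4
]

print(run(program))  # -2
```

The JNEG at index 2 is taken only when the flag set by the preceding SUB indicates a negative result; until then the program counter simply advances or is sent back by the unconditional JMP.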

CLOCK PULSES


The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU’s circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses. More complex tasks take several clock periods to complete. Clock pulses are measured in Hertz, or number of pulses per second. For instance, a 2-gigahertz (2-GHz) processor has 2 billion clock pulses passing through it per second. Clock pulses are a measure of the speed of a processor.
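The relationship between clock rate and the time between pulses is a simple reciprocal, which can be checked directly:

```python
# Clock period is the reciprocal of clock frequency: a faster clock
# means less time between successive pulses.
def clock_period_ns(frequency_hz):
    """Time between successive clock pulses, in nanoseconds."""
    return 1e9 / frequency_hz

print(clock_period_ns(2e9))  # 2 GHz processor: 0.5 ns per pulse
print(clock_period_ns(1e9))  # 1 GHz processor: 1.0 ns per pulse
```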



FIXED-POINT AND FLOATING-POINT NUMBERS


Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU’s floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel's Pentium chip.
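The trade-off between the two formats can be illustrated in Python, where the decimal module gives fixed-point-style arithmetic with a set number of decimal places, and the built-in float type is a binary floating-point number (the example values are arbitrary):

```python
from decimal import Decimal

# Fixed-point style: exactly two digits after the decimal point.
# Arithmetic is simple and exact within this format, but the range
# of representable values is limited.
price = Decimal("19.99") + Decimal("0.01")
print(price)  # 20.00

# Floating-point: a mantissa scaled by an exponent (a power of ten
# conceptually; a power of two in hardware) covers a huge range, at
# the cost of more complex arithmetic.
big = 6.02e23    # a very large value
small = 1.6e-19  # a very small value
print(f"{big * small:.3e}")  # 9.632e+04
```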


CENTRAL PROCESSING UNIT (CPU)

INTRODUCTION

Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.


BUS NETWORK

Bus Network, in computer science, a topology (configuration) for a local area network in which all nodes are connected to a main communications line (bus). On a bus network, each node monitors activity on the line. Messages are detected by all nodes but are accepted only by the node(s) to which they are addressed. Because a bus network relies on a common data “highway,” a malfunctioning node simply ceases to communicate; it doesn't disrupt operation as it might on a ring network, in which messages are passed from one node to the next. To avoid collisions that occur when two or more nodes try to use the line at the same time, bus networks commonly rely on collision detection or Token Passing to regulate traffic.
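The collision-and-retry idea can be sketched as a toy simulation: nodes that transmit in the same time slot detect the collision and each back off for a random number of slots before retrying. This is only an illustration of the principle, not a real CSMA/CD implementation; the node names, slot model, and backoff range are arbitrary.

```python
import random

# Toy model of collision detection on a shared bus: in each time
# slot, every node that is ready transmits; if more than one node
# transmits, a collision occurs and each backs off a random number
# of slots before trying again.
def simulate(senders, max_slots=200, seed=1):
    rng = random.Random(seed)
    ready = {node: 0 for node in senders}  # earliest slot each node may try
    delivered = []
    for slot in range(max_slots):
        trying = [n for n, t in ready.items()
                  if t <= slot and n not in delivered]
        if len(trying) == 1:
            delivered.append(trying[0])          # lone sender: success
        elif len(trying) > 1:
            for n in trying:                     # collision: random backoff
                ready[n] = slot + 1 + rng.randint(1, 8)
        if len(delivered) == len(senders):
            break
    return delivered

print(simulate(["A", "B"]))  # both messages eventually get through
```

Because the backoffs are random, the two nodes almost always pick different retry slots, so both transmissions succeed after a short delay instead of colliding forever.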


COMPUTER


A computer is an electronic device that receives input, stores and manipulates data, and provides output in a useful format at high speed. A computer performs tasks, such as calculations or electronic communication, under the control of a set of instructions called a program. Computers perform a wide variety of activities reliably, accurately, and quickly. The computer is the most versatile machine ever created: it plays a vital role in education, industry, government, medicine, scientific research, law, and even music and art. Without computers, life would certainly be difficult and different.

CHARACTERISTICS OF COMPUTER
Today, computers are found everywhere: in offices, homes, schools and many other places. Much of the world runs on computers, and computers have changed our lives. Some of the characteristics of computers, which make them the most essential part of every emerging technology, are listed below:

Speed
Computers work at tremendous speed, processing data at an extremely fast rate. At present, a powerful computer can perform billions of operations in just a second.
Millisecond = a thousandth of a second (1/1,000)
Microsecond = a millionth of a second (1/1,000,000)
Nanosecond = a billionth of a second (1/1,000,000,000)
Picosecond = a trillionth of a second (1/1,000,000,000,000)

Accuracy
Computers are very accurate. The level of accuracy depends upon the instructions and the type of machine being used. A computer is capable of doing only what it is instructed to do; inaccurate instructions for processing lead to inaccurate results. This is known as GIGO (Garbage In, Garbage Out). Errors in the results usually occur due to human factors rather than technological weaknesses.

Automatic
Computers are automatic machines. Once a program is in the memory of a computer, no human intervention is needed: the computer follows the instructions step by step, executes them, and terminates execution when it receives the command to do so.

Storage capacity
Computers have a main memory and secondary storage systems. The main memory of the computer is relatively small and can hold only a certain amount of information. Therefore, larger amounts of data and information are stored in secondary storage media such as magnetic disks and optical disks. Computers can also retrieve the stored information instantly when desired.

Diligence
A computer, being a machine, does not suffer from the human problems of tiredness and lack of concentration. It can work continuously for hours without making mistakes. Even if millions of calculations are to be performed, it will perform the last calculation with the same accuracy and speed as the first one.

LIMITATIONS OF COMPUTER
Computers have certain limitations too. As a machine, a computer can only perform what it is programmed to do. Computers lack decision-making power; they cannot decide on their own. If an unanticipated situation arises, a computer will either produce erroneous results or abandon the task altogether, as it does not have the ability to work out an alternative solution.



BEST INPUT DEVICE AVAILABLE IN MARKET

INPUT DEVICE
An input device can be defined as an electromechanical device that allows the user to feed data and instructions into the computer for analysis or storage and to give commands to the computer. Data and instructions are entered into the computer’s main memory through an input device. An input device captures data and translates it into a form the computer can process. Input devices can be broadly classified into the following categories:

Keyboard
A keyboard is the most common input device. Using a keyboard, the user can type text and execute commands. The keyboard is designed to resemble a regular typewriter with a few additional keys. Data is entered into the computer by simply pressing various keys.
The layout of a keyboard comes in various styles but QWERTY is the most common layout. The layout of the keyboard has changed very little ever since it was introduced. In fact, the most common change in its technology has simply been the natural evolution of adding more keys that provide additional functionality. The number of keys on a keyboard varies from 82 keys to 108 keys. Portable computers such as laptops quite often have custom keyboards that have slightly different key arrangements from a standard keyboard.

Pointing Devices
Pointing devices are the input devices by which we can point out and select the items rapidly from the multiple options displayed on the screen. These devices can also be used to create graphic elements on the screen such as lines, curves and freehand shapes. The most common types of pointing devices available are;
a) Mouse b) Trackball c) Joystick
d) Light Pen e) Touch Screen f) Touch Pad

a) Mouse
A mouse is a small hand-held pointing device used to create graphic elements on the screen such as lines, curves, and freehand shapes. It is also used to run programs and pull down menus in a GUI (Graphical User Interface) based computer system. It is rectangular in shape, with a rubber ball embedded at its lower side and buttons on the top. Usually a mouse contains two or three buttons, which can be used to input commands or instructions. Mice may be classified into two categories:
I) Mechanical mouse II) Optical mouse

I) Mechanical mouse
A mechanical mouse has a rubber ball at its bottom surface, which rotates as the mouse moves along a flat surface, moving the cursor. It is the most common and least expensive pointing device. Microsoft, IBM and Logitech are some well-known makers of mechanical mice.

II) Optical mouse
An optical mouse uses a light beam instead of a rotating ball to detect movement, originally across a specially patterned mouse pad. As the user moves the mouse on a flat surface, the cursor on the screen moves in the direction of the mouse’s movement. It is more expensive than the mechanical mouse. Modern optical mice are accurate and often do not need a mouse pad.

b) Track Ball
A trackball is another pointing device; it uses a ball that sits in a square cradle. In general, a trackball is just like a mouse turned upside down. The ball is rolled with the fingers to move the cursor around the screen. A trackball requires less space than a mouse, because the whole device is not moved to move the cursor. It is often attached to or built into the keyboard. Trackballs built into the keyboard are commonly used in laptop computers, because a mouse is not practical for laptop users in a small space. This pointing device comes in various shapes but with the same functionality, and it works like a mouse.

c) Joystick
A joystick is a device that moves in all directions and controls the movement of the cursor on the screen. The joystick offers three types of control:
• Digital control
• Glide control
• Direct control
Digital control allows limited movement in a number of directions, such as up, down, left and right. Glide and direct controls allow movement in all directions (360 degrees). The basic design of a joystick consists of a stick attached to a plastic base with a flexible rubber sheath. It has some push buttons and a circuit board placed under the stick. Joysticks are mainly used for computer games, for controlling industrial robots, and for other applications such as flight simulators and training simulators.

d) Light Pen
A light pen is a hand-held electro-optical pointing device connected to the computer by a cable. When it touches the screen of a connected monitor, it allows the computer to determine where on the screen the pen is pointed. It facilitates drawing images and selecting objects on the display screen by pointing directly at them with the pen. Light pens give the user the full range of mouse capabilities without using a pad or any horizontal surface. Using a light pen, the user can interact more easily with applications in such modes as dragging and dropping or highlighting. It is very popular for graphics work in engineering, such as CAD (Computer Aided Design).

e) Touch Screen
A touch screen is a special kind of screen device placed on the computer monitor to allow the direct selection or activation of the computer’s information when somebody touches the screen. Essentially, it registers the input when a finger or other object touches the screen. Touch screens are normally used to access information with minimum effort. However, they are not suitable for input of large amounts of data. Typically, they are used in information-providing systems in hospitals, airline and railway reservation counters, amusement parks, etc.

f) Touch Pad
A touch pad is one of the latest pointing devices. It looks like a small gray window, about two inches wide. It is used in portable computers such as laptops and notebooks as a substitute for the mouse. It has two buttons below or above the pad which work like mouse buttons. You can move the cursor on the screen by moving a finger or other object along the pad, click by tapping a finger on the touch pad, and drag with a tap followed by a continuous sliding motion.

Digital Camera
A digital camera is also an input device; it stores pictures digitally rather than recording them on film. Once a picture has been taken, it is stored in the camera’s memory chip. The picture can be downloaded to a computer system, manipulated with image editing software, and then printed. The major advantage of digital cameras is that making photos is both inexpensive and fast because there is no film processing.

Scanner
A scanner scans an image and transforms it into ASCII codes and graphics, which can be edited, manipulated, combined and then printed. Scanners use a light beam to scan the input data. If the scanned data is an image, it can be changed using special image editing software. If the image is a page of text, special optical character recognition software must be used to convert the image of the letters into text, which can then be edited using a word processor. Most scanners come with a utility program that allows them to communicate with the computer and save the scanned image as a graphics file on the computer. Commonly, scanners are classified into two types:
• Hand-Held scanner
• Flat-Bed scanner

Microphone
A microphone is a speech recognition device. Speech recognition is one of the most interactive ways to communicate with the computer: the user can simply instruct the computer, through a microphone, about the task to be performed. It is the technology by which sounds, words or phrases spoken by humans are converted into digital signals, and these signals are transformed into computer-generated text or commands. Most speech recognition systems are speaker-dependent, so they must be trained separately for each individual user. The system learns the voice of the user, who speaks isolated words repeatedly; these words are then recognizable in the future. It is most popular in the corporate world among non-typists, people with disabilities, and business travelers who tape-record information for later transcription.

Graphic Digitizer
Graphic digitizer is an input device, which is used for converting pictures, maps and drawings into digital form for storage in computers. This enables re-creation of the drawing whenever required. It also facilitates any changes in the drawing whenever required.
A digitizer consists of a digital tablet (also known as a graphics tablet) and an associated stylus. The digitizing tablet is a flat surface containing hundreds of fine copper wires forming a grid. Each copper wire receives electric pulses. The digitizing tablet can be spread over a working table and is connected to the computer.

Optical Scanners
New technologies have developed alternative methods of inputting data instead of entering it through keystrokes. Devices such as bar code readers can interpret machine-printed marks or codes. Accordingly, there are four types of optical recognition:
a) OCR (Optical Character Recognition)
b) OMR (Optical Mark Recognition)
c) MICR (Magnetic-Ink Character Recognition)
d) BCR(Bar Code Reader)

Optical Character Recognition (OCR)
Optical Character Recognition is the process of scanning printed pages as images on a flatbed scanner and using OCR software to recognize the letters as ASCII text. The device used for this technology is called an optical reader. With an OCR system, a book or magazine article can be fed directly into an electronic computer file, and this file can then be edited using a word processor. Advanced OCR systems can read text in a large variety of fonts.

Optical Mark Recognition (OMR)
Optical Mark Recognition is the process of recognizing a pre-specified type of mark made by pencil or pen on paper. This type of technology is used to evaluate the answer sheets of competitive examinations. Optical mark reading is done by a special device called an optical mark reader.

Magnetic-Ink Character Recognition (MICR)
Magnetic-Ink Character Recognition technology is used by the banking industry for faster processing of large volumes of cheques. This technology also ensures accuracy of data entry, because most of the information is pre-printed on the cheque and is fed directly to the computer. The device used in this technology is the magnetic-ink character reader.

Bar Code Reader (BCR)
A bar code is a machine-readable code in the form of a pattern of parallel vertical lines. Bar codes are commonly used for labeling goods in supermarkets, numbering books in libraries, etc. These codes are sensed and read by a photoelectric device called a bar code reader, which reads the code by means of reflected light. The information recorded in the bar code is fed into the computer, which recognizes it from the thickness and spacing of the bars.
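The bars ultimately decode to digits, and most bar code formats end with an arithmetic check digit so the reader can detect a misread. As one concrete example, the check digit of an EAN-13 code (the format common on retail goods) is computed like this:

```python
# EAN-13 check digit: the first 12 digits, from the left, are weighted
# alternately 1, 3, 1, 3, ...; the 13th (check) digit brings the
# weighted sum up to the next multiple of 10.
def ean13_check_digit(first12: str) -> int:
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# Example: the 13th digit of the code starting 400638133393.
print(ean13_check_digit("400638133393"))  # 1
```

After decoding the bars, the reader recomputes this digit; a mismatch means the code was misread and must be scanned again.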

Read more »

FIRST GENERATION COMPUTERS


First generation computers were developed during the period 1940 to 1956. They used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. The vacuum tube was developed by Lee De Forest. A vacuum tube is a device generally used to amplify a signal by controlling the movement of electrons in an evacuated space. First generation computers were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions.

CHARACTERISTICS
1) First generation computers were based on vacuum tubes.
2) Their operating speed was very slow.
3) They were very large in size.
4) They generated a large amount of heat.
5) Machine language was used for programming.
6) First generation computers were unreliable.
7) They were difficult to program and use.

UNIVAC, EDVAC, EDSAC and ENIAC computers are examples of first generation computing devices.

Read About Second Generation of Computer

Read more »

SECOND GENERATION COMPUTERS



Second generation computers were developed during the period 1956 to 1963. They emerged with the development of the transistor, invented in 1947 by three scientists: J. Bardeen, W.H. Brattain and W. Shockley. A transistor is a small device made of semiconductor material such as germanium or silicon. Although the transistor was invented in 1947, it was not widely used until the end of the 1950s. The transistor made second generation computers faster, smaller, cheaper, more energy-efficient and more reliable than their first generation predecessors. Although the transistors used in these computers still generated a great deal of heat, which could ultimately damage the machine, they were far better than vacuum tubes.
Second generation computers used low level languages, i.e. machine language and assembly language, which made it easier for programmers to specify instructions. Later, high level programming languages such as COBOL and FORTRAN were introduced. Magnetic cores were used as primary storage. Second generation computers also had faster input/output devices, which brought further improvement.
CHARACTERISTICS
1) Transistors were used in place of vacuum tubes.
2) Second generation computers were smaller in comparison with the first generation computers.
3) They were faster in comparison with the first generation computers.
4) They generated less heat and were less prone to failure.
5) They took comparatively less computational time.
6) Assembly language was used for programming.
7) Second generation computers had faster input/output devices.

IBM 7000, NCR 304, IBM 650, IBM 1401, ATLAS and Mark III are the examples of second generation computers.

Read About Third Generation Computer

Read more »

THIRD GENERATION COMPUTERS



Third generation computers were developed during the period 1964 to 1971. They emerged with the development of the IC (Integrated Circuit), whose invention was the greatest achievement of this period. The IC was invented by Jack Kilby and Robert Noyce in 1958-59. An IC is a single component containing a number of transistors. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
Keyboards and monitors were developed during the third generation. Third generation computers also interfaced with an operating system, which allowed the machine to run many different applications at one time, with a central program that monitored the memory.

CHARACTERISTICS
1) IC was used instead of transistors in the third generation computers.
2) Third generation computers were smaller in size and cheaper as compared to the second generation computers.
3) They were fast and more reliable.
4) High level language was developed.
5) Magnetic core and solid-state memory were used as main storage.
6) They were able to reduce computational time and had low maintenance cost.
7) Input/Output devices became more sophisticated.

PDP-8, PDP-11, ICL 2900, IBM 360 and IBM 370 are the examples of third generation computers.

Read About Fourth Generation Computer

Read more »

FOURTH GENERATION COMPUTERS



The fourth generation computers were built after 1971 as an extension of third generation technology. They emerged with the development of VLSI (Very Large Scale Integration). With the help of VLSI technology the microprocessor came into existence: thousands of integrated circuits were built onto a single silicon chip. What filled an entire room in the first generation could now fit in the palm of the hand. Fourth generation computers became more powerful, compact, reliable and affordable. As a result, they gave rise to the personal computer (PC) revolution.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh.

CHARACTERISTICS
1) The fourth generation computers have microprocessor-based systems.
2) They are the cheapest among all the computer generations.
3) The speed, accuracy and reliability of the computers were improved in fourth generation computers.
4) Many high-level languages, such as BASIC, PASCAL and C, came into wide use in the fourth generation.
5) Input/output devices were further refined.
6) Networking between the systems was developed.

IBM 4341, DEC 10, STAR 1000, PDP 11 and APPLE II are examples of fourth generation computers.

Read About Fifth Generation Computer

Read more »

FIFTH GENERATION COMPUTERS


Fifth generation computers are still in the developmental stage and are based on artificial intelligence. The goal of the fifth generation is to develop devices that can respond to natural language input and are capable of learning and self-organization. Quantum computation, molecular computing and nanotechnology are expected to be used in this generation. Fifth generation computers therefore aim to approach the power of human intelligence.

CHARACTERISTICS
1) The fifth generation computers will use super large scale integrated chips.
2) They will have artificial intelligence.
3) They will be able to recognize images and graphs.
4) Fifth generation computers aim to solve highly complex problems, including decision making and logical reasoning.
5) They will be able to use more than one CPU for faster processing speed.
6) Fifth generation computers are intended to work with natural language.

Read more »

UNIVAC

Universal Automatic Computer (UNIVAC)

UNIVAC (Universal Automatic Computer) was the first commercially available general purpose electronic computer. John Eckert and John Mauchly of the Moore School of Engineering, Pennsylvania, developed it in 1951. It was used to analyse the 1952 Presidential Election in the United States. It was 8 feet high, 15 feet long and weighed 5 tons. It contained 5600 tubes, 18000 crystal diodes and 300 relays. Magnetic tape was used for data input and output.

Read more »

EDSAC

Electronic Delay Storage Automatic Calculator (EDSAC)

EDSAC (Electronic Delay Storage Automatic Calculator) was developed in 1949 by a group of scientists headed by Professor Maurice Wilkes at Cambridge University, England. It was also based on the stored program concept and was one of the first machines to use binary digits. Input and output were provided by paper tape. It could perform about 700 additions per second and 200 multiplications per second. The machine occupied a room measuring about 5 by 4 metres.


Read more »

EDVAC

Electronic Discrete Variable Automatic Computer (EDVAC)

John Mauchly and J.P. Eckert also proposed the development of EDVAC. The conceptual design for the EDVAC electronic computer used the stored program concept introduced by John von Neumann. Unlike ENIAC, it used binary numbers rather than decimal. The University of Pennsylvania built the EDVAC for the U.S. Army's Ballistics Research Laboratory at the Aberdeen Proving Ground. EDVAC had almost 6000 vacuum tubes and 12000 diodes, consumed 56 kW of power, covered 490 square feet of floor space and weighed 7850 kg.

Read more »

ENIAC

Electronic Numerical Integrator and Calculator (ENIAC)

Electronic Numerical Integrator and Calculator (ENIAC) was designed by John Mauchly and John Presper Eckert and completed in 1946 at the Moore School of Electrical Engineering, University of Pennsylvania. It was the first general purpose electronic computer. ENIAC was initially built for the United States military to calculate the paths of artillery shells. It contained 18000 vacuum tubes, 7200 crystal diodes, 1500 relays, 70000 resistors, 10000 capacitors and around 5 million hand-soldered joints. It weighed nearly 30 tons and consumed 160 kW of power. Input came from an IBM card reader, while an IBM card punch was used for output.

Read more »
