Definition, Bus, Ring, Star Topology

The computers on a LAN can be physically connected with wires in different ways, depending on the requirements of an organization or office. The manner in which the computers on the LAN are connected is known as the LAN topology. So, a network topology is the physical layout of the cabling that connects the computers on the network. It can be defined as the arrangement or connection pattern of computers on a LAN. A LAN topology describes how the computers are physically connected and how they communicate on the network. It determines the data paths that may be used between any pair of nodes on the network. There are three basic network topologies: bus topology, ring topology and star topology.

BUS TOPOLOGY

In a bus topology, computers are arranged in a linear format, so it is also called a linear topology. In this topology, all nodes are connected directly to a common cable with the help of T-connectors. The common cable is also known as a network bus or trunk; the network bus acts as a backbone for the network. Many different lengths of coaxial cable are used in this type of topology. A BNC (Bayonet Neill-Concelman) jack is attached to each end of every cable segment. A T-connector is used to join segments of cable and computers: the BNC jacks on either side plug into the two arms of the T-connector, and the top of the T-connector is connected to the NIC card of a computer. The T-connectors attached to the last computers on both sides are fitted with terminators.

In this network topology, the position of the server is not fixed; it can be anywhere on the network. When any node sends data, the data passes in both directions along the bus in the form of packets and reaches all the nodes. Since each data packet contains the data bits and the destination address, only the destination node accepts the packet. The terminators at both ends absorb the packets or signals travelling on the bus, preventing the signals from bouncing back and causing interference.
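The delivery rule described above (every node on the trunk sees every packet, but only the addressed node accepts it) can be sketched in a few lines of Python. This is a minimal sketch with invented node names and a dictionary standing in for a packet; a real bus carries electrical signals, not Python objects.

```python
# Minimal sketch of bus-style delivery: the signal propagates along the
# whole cable, so every node sees each packet, but only the node whose
# address matches the packet's destination accepts it.

nodes = ["A", "B", "C", "D"]  # computers attached to the trunk (illustrative)

def send(packet):
    for node in nodes:
        if node == packet["destination"]:
            print(f"{node}: accepted {packet['data']!r}")
        else:
            print(f"{node}: saw the packet but ignored it")
    # At the physical ends of the cable, terminators absorb the signal
    # so it cannot bounce back; in this sketch the loop simply ends.

send({"destination": "C", "data": "hello"})
```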

ADVANTAGES

a. Since small segments of cable are joined to form the trunk or network bus, it is easy to set up computers on the bus.

b. Since nodes are arranged in a linear form, it requires less cable.

c. The coaxial cables used for networking are inexpensive, and attaching connectors to the cables is also easy.

d. Failure of any node does not affect other nodes on the topology.

e. Well suited for temporary networks (quick setup).

DISADVANTAGES

a. If the backbone cable (i.e. the network bus) has a problem, the entire network fails.

b. Finding faults in this topology is not easy.

c. It provides limited flexibility for change, so adding or removing nodes in between is not easy.

d. The performance degrades as the number of computers on the bus increases, so it is not suitable for large networks.

RING TOPOLOGY

In a ring topology, all nodes are arranged in the shape of a circle (ring). Both ends of each cable are connected to nodes, so there are no free ends as in a bus topology, and therefore no terminators are needed. In this topology, many different lengths of coaxial cable are used according to the distances between computers. Each computer acts like a repeater that boosts an incoming signal before passing it on to the next computer.

In this topology, data or messages are transmitted in one direction, either clockwise or anticlockwise. When any node sends a message, the message reaches the first node on the circle. If that node is the destination, it absorbs the message; otherwise it regenerates the signal and passes it to the next node on the loop, and so on. If the message is not absorbed by any node, it is absorbed by the sender node.
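The circulation just described can be sketched in Python. This is a minimal sketch with invented node names; each node either absorbs the message (if it is the destination) or repeats it onward, and an unclaimed message is absorbed by its sender.

```python
# Minimal sketch of ring delivery: the message travels in one direction,
# each intermediate node acts as a repeater, the destination absorbs it,
# and an unclaimed message is absorbed by its sender.

ring = ["A", "B", "C", "D"]  # nodes arranged in a circle (illustrative)

def send(sender, destination, data):
    index = ring.index(sender)
    while True:
        index = (index + 1) % len(ring)  # pass to the next node on the loop
        node = ring[index]
        if node == destination:
            print(f"{node}: absorbed {data!r}")
            return
        if node == sender:
            print(f"{sender}: message returned unclaimed, absorbed")
            return
        print(f"{node}: regenerating the signal and passing it on")

send("A", "C", "hello")  # A -> B (repeats) -> C (absorbs)
```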

ADVANTAGES

a. Since each node on the ring acts as a repeater, no external repeater is required to boost the signals.

b. It supports a high data transmission rate.

c. It is easy to setup.

DISADVANTAGES

a. If any node or connecting cable fails, the entire network stops working.

b. The diagnosis of the fault is difficult.

c. Since data or messages reach each node in sequence, the addition of even a few nodes increases communication delays.

d. It provides limited flexibility for change, so adding or removing nodes in between is not easy.

STAR TOPOLOGY

Star topology is the most popular topology used to connect computers and other devices on a network. In a star topology, all nodes are connected through a centrally located device in the form of a star, although the physical arrangement of the computers does not necessarily have to look like a star. The device which connects the computers on the network is either a hub or a switch. A hub or switch has connecting ports or slots where the wires running from each node are attached. A twisted-pair cable (especially unshielded twisted-pair, or UTP, cable) is used to connect a computer to the hub or switch. Each segment of UTP cable is fitted with RJ-45 jacks: one end of the cable is connected to the node and the other end to the hub or switch. When any node sends data, the data reaches the hub or switch first and is then passed on to the target computer on the network.
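The forwarding behaviour of the central device can be sketched briefly. This is a minimal sketch assuming a switch, which relays a frame only to the addressed computer (a hub, by contrast, would repeat it out of every port); the class and names are invented for illustration.

```python
# Minimal sketch of star delivery: every frame travels from the sender
# to the central switch first, and the switch relays it to the one
# computer it is addressed to.

class Switch:
    def __init__(self):
        self.ports = set()        # computers plugged into the ports

    def connect(self, computer):
        self.ports.add(computer)  # one UTP cable per computer

    def send(self, source, destination, data):
        if destination in self.ports:
            print(f"{source} -> switch -> {destination}: {data!r}")
        else:
            print("switch: unknown destination, frame dropped")

switch = Switch()
for pc in ("A", "B", "C"):
    switch.connect(pc)

switch.send("A", "B", "hello")  # only B receives the frame
```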

ADVANTAGES

a. Computers can be added or removed easily without affecting the network.

b. If any of the workstations or a connecting cable fails, it does not affect the remaining portion of the network.

c. Fault detection in the star topology is easy.

d. It is easy to extend so it is suitable for a large network.

e. It is one of the most reliable network topologies.

DISADVANTAGES

a. Since each node must connect to the centralized hub or switch, more cable is needed, which increases the cost of installation.

b. The entire network fails if there is any problem with the hub or switch.

c. In comparison to linear (bus) and ring topologies, it is a little expensive, as it requires a greater length of cable and additional controlling devices.


Definition of Computer Virus, Protection


Computer viruses are software programs that have the ability to clone themselves and can operate without the knowledge or desire of the computer user. In other words, a computer virus is a program designed to spread itself by first infecting executable files or the system areas of hard and floppy disks and then making copies of itself. Computer viruses can be transferred to a computer by different means without the knowledge and permission of the user, and they can hide themselves in other files. Whenever a host file or program is used, the virus becomes active and performs destructive tasks such as dislocating, deleting and changing the contents of files. It infects data or programs every time the user runs the infected program, taking the opportunity to replicate itself. It is the destructive intellectual creation of a computer programmer.

In 1949, Dr. John von Neumann introduced the concept of a self-replicating computer program. The first replicating program, named “Creeper”, was reported in the early 1970s on the network system of the American Department of Defense. In 1983, the American researcher Fred Cohen used the term “computer virus” in his research paper for a program that replicates itself and infects other programs. In 1986, two Pakistani brothers, Amjad and Basit Alvi, released the first IBM PC virus, “C-Brain”, to stop illegal reproduction of software developed at their Brain Computer Shop. An Indonesian programmer released the first antivirus software in 1988 to detect the C-Brain virus. This antivirus software could remove C-Brain from a computer and immunized the system against further Brain attacks. After this event, people started to take much more interest in viruses, and various new viruses began to appear.

The number of computer viruses is increasing day by day, and viruses differ from one another in nature. Viruses spread from computer to computer through electronic bulletin boards, telecommunication systems, shared floppy disks, pen drives, compact disks and the Internet. Viruses are created by computer programmers for fun, but once they begin to spread they take on a life of their own. Antivirus software is developed to protect computers from viruses.

PURPOSE OF CREATING COMPUTER VIRUS

1. To stop software piracy. Software can be easily copied from one computer to another. In order to stop software piracy, the programmers of the software themselves sometimes create computer viruses.

2. To entertain the users by displaying interesting messages or pictures.

3. To steal data and information.

4. To remind users of incidents that happened at different times.

5. To destroy data, information and files.

6. To show off their programming ability.

7. To earn money.

Computer viruses activate when the infected files or programs are used. Once a virus is active, it may replicate by various means and tries to infect other files or the operating system. When you copy files or programs from an infected computer, the viruses also transfer along with the files or programs to the portable disk, which in turn transfers the viruses to another computer whenever it is used. So computers mostly get infected through external sources. The most common ways through which viruses spread are:

· Sharing of infected external portable disk like floppy disk, pen drive or compact disk.

· Using pirated software.

· Opening of virus infected e-mail messages and attached files.

· Downloading files or programs from websites that are not secure.

· Exchanging of data, information or files over a network.

The number of viruses is increasing daily, and each virus possesses different characteristics. It is very difficult to know whether a computer is infected with viruses or not. If a computer is infected, you may see the following symptoms:

· Programs take more time to load, fail to load or hang frequently.

· Unexpected messages or images appear suddenly on the screen.

· The computer displays unusual error messages or encounters errors frequently.

· Files go missing or unexpected files appear.

· Low-memory messages are displayed frequently.

· Programs open automatically without being given any instruction.

PROTECTION FROM VIRUS

We already know that viruses are harmful to our computers. They affect our computer systems: a virus can damage important files and programs and make the computer slow. Viruses cause several other kinds of damage as well and frequently irritate users. So, protecting our computers from viruses and preventing infection is necessary. If we follow some simple tips, we can keep a computer free of viruses.

Some general tips on prevention and protection from virus infections are as follows:

1. Install anti-virus software from a well-known, reputable company and use it regularly.

2. Update the anti-virus software frequently in order to get the latest virus definitions, and scan the hard disk with those latest definitions, because new viruses come out every single day.

3. Install an ‘on access’ scanner and configure it to start automatically each time you boot your computer system. This will protect your system by checking for viruses each time your computer accesses an executable file.

4. Virus-scan any programs or other files that may contain executable code before you run or open them, no matter where they come from. There have been cases of commercially distributed floppy disks, pen drives and CD-ROMs spreading virus infections.

5. If your e-mail or news software has the ability to automatically execute JavaScript, Word macros or other executable code contained in or attached to a message, it is strongly recommended that you disable this feature.

6. Be extremely careful about accepting programs or other files during an online chat session. This seems to be one of the more common ways that people end up with virus or Trojan horse problems.

7. Back up your entire system on a regular basis, because some viruses may erase or corrupt files on your hard disk, and a recent backup lets that data be recovered.

8. Before using someone else's pen drive, check whether it is virus-infected: scan it first, and only then open it.

9. Do not use pirated software.

10. Lock the computer system with a password to prevent your computer from being used by others.

11. Do not download any programs from the Internet unless you are sure they are virus-free.

12. Be careful while checking mail that has attached documents.


HISTORY OF CPU

EARLY COMPUTERS

In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared to today’s microprocessor-driven computers. The first general purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC’s CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.


TRANSISTOR

A solution to the problems posed by vacuum tubes came in 1948, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took nine years before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.


THE INTEGRATED CIRCUIT (IC)

Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip compared to the millions or even billions of transistors per chip available on today’s CPUs.

In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest-size piece of data that a CPU can manipulate at one time.) However, a fully working integrated circuit computer required additional circuits to provide register storage, data flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators at the time. In 1975 Micro Instrumentation Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 32-bit microprocessors are most common today, microprocessors are becoming increasingly sophisticated, with many 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 3 GHz, or 3 billion clock pulses per second.


CURRENT DEVELOPMENTS

The competitive nature of the computer industry and the use of faster, more cost-effective computing continue the drive toward faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate to improvements in CPU speed.

Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.


HOW A CPU WORKS

CPU FUNCTION

A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.

As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers where it may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing it in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.
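The flow described above can be illustrated with a toy fetch-decode-execute loop. This is a minimal sketch: the instruction set, register names and the little program are invented for illustration, and a real CPU decodes binary machine code, not Python tuples.

```python
# Toy simulator of the fetch-decode-execute cycle: RAM holds the program,
# the program counter tracks the next instruction, registers provide fast
# temporary storage, and the if/elif chain plays the role of the decoder
# and ALU.

RAM = [
    ("LOAD", "R0", 5),    # put the constant 5 into register R0
    ("LOAD", "R1", 7),    # put the constant 7 into register R1
    ("ADD", "R0", "R1"),  # ALU: R0 = R0 + R1
    ("STORE", "R0", 99),  # write R0 back to memory address 99
    ("HALT",),
]

registers = {"R0": 0, "R1": 0}
program_counter = 0  # address of the next instruction to execute

while True:
    instruction = RAM[program_counter]  # fetch
    opcode = instruction[0]             # decode
    program_counter += 1
    if opcode == "LOAD":                # execute
        registers[instruction[1]] = instruction[2]
    elif opcode == "ADD":
        registers[instruction[1]] += registers[instruction[2]]
    elif opcode == "STORE":
        print(f"memory[{instruction[2]}] <- {registers[instruction[1]]}")
    elif opcode == "HALT":
        break
```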


BRANCHING INSTRUCTIONS

The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to abruptly shift to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out of order instruction stream. A conditional branch tests the result of a previous operation to see if the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
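Branching can be added to the same toy simulator. The JNEG ("jump if negative") opcode and the negative flag below are invented for illustration; real instruction sets define their own branch opcodes and flag registers.

```python
# Sketch of a conditional branch: SUB records in a flag whether its
# result was negative, and JNEG jumps out of sequence only if that
# flag is set.

RAM = [
    ("LOAD", "R0", 3),
    ("SUB", "R0", 10),  # R0 = 3 - 10 = -7, sets the negative flag
    ("JNEG", 4),        # conditional branch: jump to address 4 if flag set
    ("LOAD", "R0", 0),  # skipped whenever the branch is taken
    ("HALT",),
]

registers = {"R0": 0}
flags = {"negative": False}  # special locations tested by branches
program_counter = 0

while True:
    opcode, *operands = RAM[program_counter]
    program_counter += 1
    if opcode == "LOAD":
        registers[operands[0]] = operands[1]
    elif opcode == "SUB":
        registers[operands[0]] -= operands[1]
        flags["negative"] = registers[operands[0]] < 0
    elif opcode == "JNEG":
        if flags["negative"]:
            program_counter = operands[0]  # out-of-sequence jump
    elif opcode == "HALT":
        break

print(registers["R0"])  # prints -7: the LOAD at address 3 was skipped
```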

CLOCK PULSES


The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU’s circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses. More complex tasks take several clock periods to complete. Clock pulses are measured in Hertz, or number of pulses per second. For instance, a 2-gigahertz (2-GHz) processor has 2 billion clock pulses passing through it per second. Clock pulses are a measure of the speed of a processor.
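The arithmetic relating clock rate to clock period is simple; the short sketch below works it out for the 2-GHz example in the paragraph above.

```python
# Clock period is the reciprocal of the clock rate.
clock_rate_hz = 2_000_000_000        # 2 GHz = 2 billion pulses per second
clock_period_s = 1 / clock_rate_hz   # time between sequential pulses
print(f"{clock_period_s * 1e9} ns")  # 0.5 nanoseconds per clock pulse
```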



FIXED-POINT AND FLOATING-POINT NUMBERS


Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU’s floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel's Pentium chip.
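The trade-off between the two formats can be illustrated with a short sketch. Python's built-in int and float types merely stand in for the CPU's fixed-point and floating-point hardware; the scaled-integer scheme below is an illustrative convention, not a hardware specification.

```python
# Fixed point: keep a set number of digits after the decimal point by
# storing scaled integers (here, hundredths), so all arithmetic is fast
# integer arithmetic with a limited range.
price_a = 1999            # represents 19.99
price_b = 550             # represents 5.50
total = price_a + price_b
print(total / 100)        # 25.49

# Floating point: scientific notation (a mantissa times a power of ten;
# hardware uses powers of two) trades speed for an enormous range.
avogadro = 6.022e23       # 6.022 x 10^23
electron_mass_kg = 9.109e-31
print(avogadro * electron_mass_kg)
```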


CENTRAL PROCESSING UNIT (CPU)

INTRODUCTION

Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.


BUS NETWORK

Bus Network, in computer science, a topology (configuration) for a local area network in which all nodes are connected to a main communications line (bus). On a bus network, each node monitors activity on the line. Messages are detected by all nodes but are accepted only by the node(s) to which they are addressed. Because a bus network relies on a common data “highway,” a malfunctioning node simply ceases to communicate; it doesn't disrupt operation as it might on a ring network, in which messages are passed from one node to the next. To avoid collisions that occur when two or more nodes try to use the line at the same time, bus networks commonly rely on collision detection or token passing to regulate traffic.
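The token-passing scheme mentioned above can be sketched briefly: a node may transmit only while it holds the circulating token, so two nodes can never use the line at the same time. The node names and pending-message flags below are invented for illustration.

```python
# Sketch of token passing on a shared line: the token visits each node
# in turn, and only the current holder may transmit, which prevents
# collisions by construction.

from itertools import cycle

nodes = ["A", "B", "C"]
wants_to_send = {"A": False, "B": True, "C": True}

token_holder = cycle(nodes)  # the token circulates endlessly
for _ in range(6):           # six token hand-offs, for the demo
    node = next(token_holder)
    if wants_to_send[node]:
        print(f"{node} holds the token and transmits on the line")
        wants_to_send[node] = False
    else:
        print(f"{node} has nothing to send and passes the token on")
```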
