
Conceptual System Design

During system analysis, the analysis of system data is very important. It is carried out at more than one level, and different techniques are used at each level. At the first level, the analyst develops a conceptual system design.

Because the conceptual design sets the direction for the management information system (MIS), it is vital that managers participate seriously and heavily at this stage. Conceptual design is sometimes called feasibility design, gross design or high-level design.

The conceptual design phase takes as input:
1. a crisp statement of a management information requirement, and
2. a set of management objectives for the MIS.

In the conceptual design stage, alternative overall MIS designs are conceived and the best one is selected by the system analyst in consultation with top management. The feasibility of meeting the management objectives for the MIS is assessed, and a sketch showing how the system will work at a high level is drawn. This is why conceptual design is also known as gross design; the high-level sketch becomes the basis for the detailed MIS design.

Hence, conceptual design is a pre-design for the detailed design. In fact, conceptual design is the “centerpiece” of the process. Only after conceptual design is completed can one be sure that the MIS can successfully be constructed.

The conceptual design involves the following tasks:

1. Defining the problems in more detail.
2. Refining the management objectives to set system objectives.
3. Establishing system constraints.
4. Determining information needs and their sources.
5. Developing alternative designs and selecting one of them.
6. Documenting the conceptual design and preparing the report.

1. Define the Problem

There is no doubt that problems exist in any dynamic business. What is usually lacking, however, is a clear definition of the problems and a priority ranking for solving them. Therefore, management must take the first step in MIS design by formulating the problems to be solved. The problem definition can be refined through an iterative process.

The goals of the business lead to general business objectives, and from the objectives, plans are derived. Each business objective and each business plan is associated with information needs, and these information needs are the problems to be solved by the MIS function. Rough statements of needs are not detailed enough for the design process; the analyst refines them by:
1. Stating the information need.
2. Asking questions about that need.
3. Suggesting interpretation of that need.
4. Detailing the original statement.
5. Reviewing the more detailed statement of need with management.
These steps are repeated until the information needs and the problem to be solved are really understood. The process of problem refinement flows naturally into the system objectives.



2. Set System Objectives

Most of the time it is quite difficult to state objectives for systems that cover all the functional areas.
The manager must define the system objectives in terms of the importance of information demands, not in terms of the satisfaction of demands that are unrelated to an objective. System analysts tend to stress processing efficiency, and staff and functional supervisors commonly believe that their objective is “to complete the required report in time for management use”. This view disregards the real objective of system design: management effectiveness.

The value of a system lies in the benefits to its users. When asked for objectives, a college principal may reply, “provide quality education”, and a government bureaucrat may say, “provide more jobs for the unemployed”. Despite the difficulty, being specific is necessary. System objectives should be expressed in terms of what managers can do after their information requirements have been met.
In summary, the first step in systems design attempts to answer the questions: What is the purpose of the system? Why is it needed? What is it expected to do? Who are the users, and what are their objectives?

3. Establish System Constraints

The iterative nature of the systems design process is easily understood when we consider the third step in the process: establishing constraints. Also called problem boundaries or restrictions, constraints enable the designer to stipulate the conditions under which objectives may be attained and to consider the limitations that restrict the design. The two steps of setting objectives and establishing constraints may be considered together as one.
Although constraints may be viewed as a negative limitation on systems design, they have a positive benefit as well: establishing constraints helps to ensure that the design is realistic.
Constraints may be classified as external or internal to the organization.

External Constraints
The external environment of the organization chiefly concerns the customer. Order entry, billing and other systems that interface with the customer must be designed with the customer's needs in mind. If some outputs from the system are not acceptable to the customer, a definite limitation must be faced.
The government imposes certain restrictions on the processing of data, such as the need to maintain the security of certain classes of information or to comply with laws and regulations in the conduct of business (e.g. taxes, reporting).
Unions can also affect the operation of systems that involve their members' working conditions.
Suppliers are also an important group to consider when designing information systems, because these systems frequently interface with that group.

Internal Constraints
If top management support is not obtained for the systems concept and for the notion that computer-based information systems are vital for management planning and control, this type of design effort cannot be implemented. A good environment for information systems must be set, and one essential requirement is the approval and support of top management.

Organizational and policy considerations frequently set limits on objectives and modify an intended approach to the design of the system. Company policies frequently define or limit the approach to systems design.

Personnel needs and personnel availability are a major limiting factor in both the design and utilization of information systems. Computer and systems skills are among the most critical in the nation. The most significant constraint of all is the one concerning people.

Cost is a major resource limitation. The cost to achieve the objectives should be compared with the benefits to be derived.

Self-imposed restrictions are those placed on the design by the manager or the designer. The manager will also restrict the amount of time and effort devoted to investigation. To achieve the objective, the manager may have to scale down several requirements to make the system fit with other outputs, equipment or constraints.

4. Determine Information Needs and Sources

For a good system design, a clear statement of information needs is very important and necessary. Many organizations spend huge amounts on hardware and software to maintain existing systems or build sophisticated data banks without first determining the real information needs of management: the information that can increase the ability of managers in critical areas such as problems, alternatives, opportunities and plans.
The optimum results cannot be achieved unless managers can provide the specifications for what they want out of an information system. The manager needs information for a variety of reasons concerned with the management process. The type of need at various times and for various purposes depends largely upon two factors:
a) The personal managerial attributes of the individual manager and
b) The organizational environment in which decisions are made.
Information sources are also important in determining information needs. The system may require external information, internal information, or both.

5. Develop Alternative Designs and Select One

The development of a concept of a system is a creative process that involves synthesizing knowledge into some particular pattern. The concept of an MIS would consist of the major decision points, patterns of information flow, channels of information and roles of managers and competitors. The concept is the sketch of the structures or skeleton of the Information System, which guides and restricts the form of the detailed design. If conceptual design is the skeleton, then detailed design is the flesh.

E.g., two teams of students are doing a project on a tourist guide and contact information system. One concept produced is a sketch showing details about particular places, describing their culture and heritage along with the colleges, hotels and trade, whereas the other team produces a sketch describing colleges along with their faculties and fee structures for various needs.

It is obvious that each alternative concept of a system has advantages and disadvantages. Sometimes one concept will dominate all others by major criteria.

6. Document the Best Design

Sufficient information has been accumulated to begin a more detailed description of the system concept. This description includes essentially a flowchart or other documentation of the flow of information through the system, the inputs and the outputs.

The manager should be involved to the extent of ensuring that the system provides the information required; the designer is concerned with the nature of the materials and equipment as well as with technical processing considerations.

Details to be worked out later by the designer will include exact instructions as to what data are to be captured and when, the files to be used, the details of how processing is to be done, and what outputs will be generated by the system.


Definition, Bus, Ring, Star Topology

The computers on a LAN can be physically connected with wires in different manners according to the requirements of an organization or office. The manner in which the computers on a LAN are connected is known as the LAN topology. So, network topology is the physical layout of the cabling for connecting computers on the network. It can be defined as the arrangement or connection pattern of computers on a LAN. A LAN topology describes how the computers are physically connected and how they communicate on the network. It determines the data paths that may be used between any pair of nodes of the network. There are three basic network topologies: bus topology, ring topology and star topology.

BUS TOPOLOGY

In a bus topology, computers are arranged in a linear format, so it is also called a linear topology. In this topology, all nodes are connected directly to a common cable with the help of T-connectors. The common cable is also known as the network bus or trunk; the network bus acts as a backbone to the network. Many different lengths of coaxial cable are used in this type of topology. On both ends of each segment of the network bus (i.e. the coaxial cable), BNC (Bayonet Neill-Concelman) jacks are attached. A T-connector is used to join segments of cable and computers: the BNC jack on each side of the network bus is connected to the T-connector, and the top of the T-connector is connected to the NIC card of a computer. The T-connectors connected to the last computers on both sides are attached with terminators.

In this network topology, the position of the server is not fixed, i.e. it can be anywhere on the network. When any node sends data, the data pass in both directions along the bus in the form of packets and reach all the nodes. Since each data packet contains the data bits and the destination address, only the destination node accepts the data packets. The terminators at both ends absorb the packets or signals travelling on the bus to prevent the bouncing of signals, which causes interference.
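
The delivery rule described above — every node sees every packet, but only the addressee keeps it — can be sketched in a few lines of Python. The class names and frame format here are invented for illustration and are not a real networking API:

```python
# Toy model of a bus topology: the shared trunk broadcasts every frame
# to all attached nodes; only the destination node keeps the data.

class Node:
    def __init__(self, name):
        self.name = name
        self.received = []

    def on_frame(self, frame):
        # Each frame carries the destination address and the data bits;
        # a node keeps the frame only if it is the addressee.
        if frame["dst"] == self.name:
            self.received.append(frame["data"])

class Bus:
    """The common cable (trunk): every frame reaches every node."""
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)

    def send(self, src, dst, data):
        frame = {"src": src, "dst": dst, "data": data}
        for node in self.nodes:       # signal travels in both directions;
            node.on_frame(frame)      # terminators absorb it at the ends

bus = Bus()
a, b, c = Node("A"), Node("B"), Node("C")
for n in (a, b, c):
    bus.attach(n)

bus.send("A", "C", "hello")
print(c.received)  # ['hello'] -- only C, the addressee, keeps the data
print(b.received)  # []        -- B saw the frame but discarded it
```

Note how the model also makes the bus topology's main weakness visible: everything depends on the single shared `Bus` object, just as the real network depends on the backbone cable.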

ADVANTAGES

a. Since small segments of cable are joined to form the trunk or network bus, it is easy to set up computers on the bus.

b. Since nodes are arranged in linear form, less cable is required.

c. The coaxial cables used for networking are inexpensive, and joining connectors to the cables is easy.

d. Failure of any node does not affect the other nodes on the topology.

e. Well suited for temporary networks (quick set up).

DISADVANTAGES

a. If the backbone cable, i.e. the network bus, has a problem, then the entire network fails.

b. Finding a fault on this topology is not easy.

c. It provides limited flexibility for change, so adding or removing nodes in between is not easy.

d. The performance degrades as the number of computers on the bus increases, so it is not suitable for a large network.

RING TOPOLOGY

In a ring topology, all nodes are arranged in the shape of a circle (ring). Both ends of each cable are connected to nodes, so there are no free ends as in a bus topology, and hence no terminators in this topology. Many different lengths of coaxial cable are used according to the distance between computers. In this topology, each computer acts like a repeater that boosts an incoming signal before passing it on to the next computer.

In this topology, data or messages are transmitted in one direction, either clockwise or anticlockwise. When any node sends a message, the message reaches the next node on the circle. If that node is the destination node, it absorbs the message; otherwise, it regenerates the signal and passes it to the next node on the loop, and so on. If the message is not absorbed by any node, it travels full circle and is absorbed by the sender node.
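
This pass-or-absorb behaviour can be sketched as a short Python function; the node names and the helper itself are invented for illustration:

```python
# Toy model of one-way transmission on a ring: each node repeats the
# signal to its neighbour unless it is the destination, and an unclaimed
# message travels full circle back to the sender.

def send_on_ring(ring, src, dst):
    """Return the nodes the message visits, in clockwise order.
    ring: list of node names arranged in a circle."""
    i = ring.index(src)
    path = []
    for _ in range(len(ring)):        # at most one full trip around
        i = (i + 1) % len(ring)       # one-way: always the next node
        path.append(ring[i])
        if ring[i] in (dst, src):     # absorbed by the destination, or
            return path               # by the sender if no one took it

ring = ["A", "B", "C", "D"]
print(send_on_ring(ring, "A", "C"))   # ['B', 'C'] -- B repeated it, C absorbed it
print(send_on_ring(ring, "A", "Z"))   # ['B', 'C', 'D', 'A'] -- back to the sender
```

The growing `path` list also illustrates the disadvantage noted below: every extra node between sender and destination adds one more hop of delay.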

ADVANTAGES

a. Since each node on the ring acts as a repeater, no external repeater is required to boost the signals.

b. It supports a high data transmission rate.

c. It is easy to set up.

DISADVANTAGES

a. If any node or connecting cable fails, the entire network does not work.

b. The diagnosis of a fault is difficult.

c. Since data or messages reach the nodes in sequence, the addition of even a few nodes increases the communication delay.

d. It provides limited flexibility for change, so adding or removing nodes in between is not easy.

STAR TOPOLOGY

Star topology is the most popular topology used to connect computers and other devices on a network. In a star topology, all nodes are connected through a centrally located device in the form of a star, although the physical arrangement of the computers need not actually be star-shaped. The device which connects the computers on the network is either a hub or a switch. A hub or switch has connecting ports or slots where the wires running from each node are connected. A twisted pair cable (especially unshielded twisted pair, UTP) is used to connect a computer to the hub or switch. Each segment of UTP cable is attached with RJ-45 jacks: one side of the cable is connected to the node and the other side to the hub or switch. When any node sends data or a message, it reaches the hub or switch first and is then passed on to the targeted computer on the network.

ADVANTAGES

a. Computers can be added or removed easily without affecting the network.

b. If any workstation or connecting cable fails, it does not affect the remaining portion of the network.

c. Fault detection in the star topology is easy.

d. It is easy to extend so it is suitable for a large network.

e. It is one of the most reliable network topologies.

DISADVANTAGES

a. Since each node is required to connect to the centralized hub or switch, more cable is needed, which increases the cost of installation.

b. The entire network fails if there is any problem with the hub or switch.

c. In comparison to bus and ring topologies, it is a little more expensive, as it requires greater lengths of cable and other controlling devices.


Definition of Computer Virus, Protection


Computer viruses are software programs that have the ability to clone themselves and can operate without the knowledge or desire of the computer user. In other words, a computer virus is a program designed to spread itself by first infecting executable files or the system areas of hard and floppy disks and then making copies of itself. A computer virus can transfer to a computer by different means, without the knowledge and permission of the user, and can hide itself inside other files. Whenever a host file or program is used, the virus becomes active and performs destructive tasks such as dislocating, deleting and changing the contents of files. It infects data or programs every time the user runs the infected program, taking that opportunity to replicate itself. It is the destructive intellectual creation of a computer programmer.

In 1949, Dr. John von Neumann introduced the concept of a self-replicating computer program. The first replicating program, named “Creeper”, was reported in the early 1970s on the network system of the American Department of Defense. In 1983, the American researcher Fred Cohen used the term “computer virus” in his research for a program that replicates itself and prevents other programs from being executed. In 1986, two Pakistani brothers, Amjad and Basit Farooq Alvi, released the first IBM PC virus, “C-Brain”, to stop the illegal reproduction of software developed at their Brain Computer Shop. An Indonesian programmer released the first antivirus software in 1988 to detect the C-Brain virus. This antivirus software could remove C-Brain from a computer and immunize the system against further Brain attacks. After this event, people started to take much more interest in viruses, and various new viruses began to be produced.

The number of computer viruses is increasing day by day, and their nature varies from one to another. Viruses spread from computer to computer through electronic bulletin boards, telecommunication systems, shared floppy disks, pen drives, compact disks and the Internet. Viruses may be created by computer programmers for fun, but once they begin to spread they take on a life of their own. Antivirus software is developed to protect computers from viruses.

PURPOSE OF CREATING COMPUTER VIRUS

1. To stop software piracy. Software can be easily copied from one computer to another, so in order to stop software piracy, the programmers of the software sometimes create computer viruses themselves.

2. To entertain the users by displaying interesting messages or pictures.

3. To steal data and information.

4. To remind users of incidents that happened at different times.

5. To destroy data, information and files.

6. To expose their programming ability.

7. To earn money.

Computer viruses activate when the infected files or programs are used. Once a virus is active, it may replicate by various means and try to infect other files or the operating system. When you copy files or programs from an infected computer, the viruses also transfer along with the files or programs to the portable disk, which in turn transfers the viruses to another computer whenever it is used. So, computers mostly get infected through external sources. The most common ways through which viruses spread are:

· Sharing infected external portable disks like floppy disks, pen drives or compact disks.

· Using pirated software.

· Opening of virus infected e-mail messages and attached files.

· Downloading files or programs from websites which are not secure.

· Exchanging of data, information or files over a network.

The number of viruses is increasing daily, and each virus possesses different characteristics, so it is very difficult to know whether a computer is infected or not. You may see the following symptoms if a computer is infected with viruses:

· Programs take more time to load, fail to load or hang frequently.

· Unexpected messages or images appear suddenly on the screen.

· Unusual error messages are displayed or errors are encountered frequently.

· Files go missing or unexpected files appear.

· A low-memory message is displayed frequently.

· Programs open automatically without being instructed to.

PROTECTION FROM VIRUS

We already know that viruses are harmful to our computers: they can damage important files and programs, slow the computer down, and irritate users in many other ways. So, protecting our computers from viruses and preventing infection is necessary. If we follow some simple tips, we can keep a computer free of viruses.

Some general tips on prevention and protection from virus infections are as follows:

1. Install anti-virus software from a well-known, reputable company and use it regularly.

2. Update the anti-virus software frequently in order to get the latest virus definitions, and scan the hard disk using the latest definitions, because new viruses come out every single day.

3. Install an ‘on access’ scanner and configure it to start automatically each time you boot your computer system. This will protect your system by checking for viruses each time your computer accesses an executable file.

4. Scan any programs or other files that may contain executable code before you run or open them, no matter where they come from. There have been cases of commercially distributed floppy disks, pen drives and CD-ROMs spreading virus infections.

5. If your e-mail or news software has the ability to automatically execute JavaScript, Word macros or other executable code contained in or attached to a message, it is strongly recommended that you disable this feature.

6. Be extremely careful about accepting programs or other files during an on-line chat session: this seems to be one of the more common means by which people end up with a virus or Trojan horse problem.

7. Back up your entire system on a regular basis, because some viruses may erase or corrupt files on your hard disk, and the data can then be recovered from a recent backup.

8. Before using anyone else's pen drive, check whether it is virus infected: scan it first and only then open it.

9. Do not use pirated software.

10. Lock the computer system with a password to prevent it from being used by others.

11. Do not download any programs from the Internet unless you have confirmed that they are virus free.

12. Be careful while checking mail with attached documents.


HISTORY OF CPU

EARLY COMPUTERS

In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared to today’s microprocessor-driven computers. The first general purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC’s CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.


TRANSISTOR

A solution to the problems posed by vacuum tubes came in 1948, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took nine years before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.


THE INTEGRATED CIRCUIT (IC)

Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip compared to the millions or even billions of transistors per chip available on today’s CPUs.

In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest-size piece of data that a CPU can manipulate at one time.) However, a fully working integrated circuit computer required additional circuits to provide register storage, data flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators at the time. In 1975 Micro Instrumentation Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 32-bit microprocessors are most common today, microprocessors are becoming increasingly sophisticated, with many 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 3 GHz, or 3 billion clock pulses per second.


CURRENT DEVELOPMENTS

The competitive nature of the computer industry and the use of faster, more cost-effective computing continue the drive toward faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate to improvements in CPU speed.

Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.


HOW A CPU WORKS

CPU FUNCTION

A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.

As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers where it may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing it in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.
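
As a rough illustration of this cycle, here is a toy machine in Python in which the program counter fetches each instruction from "RAM", a decoder dispatches it, and the ALU result lands in a register. The mnemonics, addresses and register names are invented for the sketch and do not correspond to any real instruction set:

```python
# Toy fetch-decode-execute machine: instructions and data share one
# addressable "RAM", and the program counter walks the instruction stream.

ram = {                       # each instruction has a unique address
    0: ("LOAD", "r0", 100),   # r0 <- value stored at RAM address 100
    1: ("LOAD", "r1", 101),
    2: ("ADD",  "r0", "r1"),  # ALU: r0 <- r0 + r1
    3: ("STORE", "r0", 102),  # send the result back to RAM
    4: ("HALT",),
    100: 7, 101: 5, 102: 0,   # data locations
}

registers = {"r0": 0, "r1": 0}   # fast temporary storage
pc = 0                           # program counter

while True:
    op, *args = ram[pc]          # fetch, then decode by mnemonic
    pc += 1                      # advance to the next instruction
    if op == "LOAD":
        registers[args[0]] = ram[args[1]]
    elif op == "ADD":
        registers[args[0]] += registers[args[1]]
    elif op == "STORE":
        ram[args[1]] = registers[args[0]]
    elif op == "HALT":
        break

print(ram[102])  # 12 -- the sum of the values at addresses 100 and 101
```

The `while` loop plays the role of the clock-driven cycle: one fetch, one decode, one execute per pass, with `pc` keeping the instructions in order.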


BRANCHING INSTRUCTIONS

The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to abruptly shift to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out of order instruction stream. A conditional branch tests the result of a previous operation to see if the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
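
A conditional branch can be sketched as a tiny interpreter in which a flag set by a previous subtraction decides whether the program counter is overwritten. The mnemonics here are invented for illustration:

```python
# Countdown loop built from a conditional branch: SUB sets a "zero" flag,
# and JNZ (jump if not zero) rewrites the program counter only while the
# flag is clear.

program = [
    ("SET", "r0", 3),        # r0 <- 3
    ("SUB", "r0", 1),        # r0 <- r0 - 1, and set the zero flag
    ("JNZ", 1),              # if the result was not zero, jump back to 1
    ("HALT",),
]

registers, flags, pc, trace = {"r0": 0}, {"zero": False}, 0, []

while True:
    op, *args = program[pc]
    pc += 1                          # default: fall through sequentially
    if op == "SET":
        registers[args[0]] = args[1]
    elif op == "SUB":
        registers[args[0]] -= args[1]
        flags["zero"] = registers[args[0]] == 0  # stored for later testing
        trace.append(registers[args[0]])
    elif op == "JNZ":
        if not flags["zero"]:
            pc = args[0]             # conditional branch taken
    elif op == "HALT":
        break

print(trace)  # [2, 1, 0] -- the loop body ran three times, then fell through
```

Replacing the flag test with an unconditional `pc = args[0]` would give the other kind of branch: a jump that is always taken.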

CLOCK PULSES


The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU’s circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses. More complex tasks take several clock periods to complete. Clock pulses are measured in Hertz, or number of pulses per second. For instance, a 2-gigahertz (2-GHz) processor has 2 billion clock pulses passing through it per second. Clock pulses are a measure of the speed of a processor.
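
The arithmetic implied here is simply that the clock period is the reciprocal of the clock rate:

```python
# For a 2-GHz processor, the time between successive clock pulses is
# 1 / frequency, i.e. half a nanosecond.
frequency_hz = 2_000_000_000            # 2 billion pulses per second
period_ns = 1e9 / frequency_hz          # nanoseconds between pulses
print(period_ns)                        # 0.5

# A task that needs 10 clock periods on this CPU therefore takes:
print(10 * period_ns)                   # 5.0 nanoseconds
```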



FIXED-POINT AND FLOATING-POINT NUMBERS


Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU’s floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel's Pentium chip.
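
The trade-off can be demonstrated without any special hardware. In the sketch below, integers with an agreed decimal-point position stand in for fixed-point values, while ordinary Python floats show both the wide range and the binary rounding inherent in floating-point representation:

```python
# Fixed-point: integers with an agreed decimal-point position (here,
# whole cents). Arithmetic is exact and fast, but the range is limited.
a_cents, b_cents = 10, 20               # 0.10 and 0.20 held as cents
print(a_cents + b_cents)                # 30 -- exactly 0.30

# Floating-point: a mantissa scaled by an exponent covers a huge range,
# but many decimal fractions are rounded to the nearest binary value.
print(0.1 + 0.2)                        # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                 # False -- rounding is visible

# The same float format happily holds magnitudes far outside any
# practical fixed-point range:
print(6.02e23 * 2)                      # huge values are no problem
```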

