Wednesday, December 12, 2007

IT Concept

INFORMATION TECHNOLOGY:

Information technology is the use of modern technology to aid in the storage, processing, analysis, and communication of information.

Put simply, any technology used to handle information can be called information technology.

IT typically refers to equipment such as computers, data storage devices, networks, and communication devices. In the broad sense, it is the application of modern communication and computing technologies to the creation, management, and use of knowledge.

Information technology means the use of hardware, software, services, and supporting infrastructure to manage and deliver information using voice, data, and video.

More specifically, information technology can mean:

v Managing a computer network.

v Creating Web pages.

v Producing Video digitally.

v Designing computer systems.

v Designing 3-D artwork.

v Coding Software.

v Managing projects and budgets.

COMPUTER

The word “computer” is derived from the word “compute”, which means to calculate. With this meaning, a computer is considered to be a calculating device that can perform arithmetic operations at enormous speed.

Nowadays a computer system can perform much more than calculations. The modern definition of the computer can be given as:

“A computer is an electronic device that operates upon information or data. A computer can store, process, and retrieve data: it is a machine that takes input in the form of data, processes that data, and then gives the results.”

CHARACTERISTICS OF COMPUTERS

Nowadays almost everyone wants to work on a computer to solve his or her problems. The computer is used so widely because of the characteristics that make it useful. Following are some of the characteristics due to which the computer is considered so useful and helpful.

· Speed.

· Accuracy.

· Diligence.

· Versatility.

· Power of remembering.

· No feelings.

SPEED

The first and main reason for using a computer is its speed. A computer is a very fast device. It can perform in a few seconds the amount of work that a human being would take an entire year to do. Processors of ever-increasing speed are introduced into the market, and everybody wants to buy a fast computer.

ACCURACY

This is the second characteristic of the computer due to which it is used so much. The accuracy of a computer is consistently high.

Errors may occur in a computer, but these are mainly due to human beings rather than technological weakness. So if the result of the computer is not correct, it is due to the mistake of human beings.

DILIGENCE

Human beings get bored by monotony, but this is not the case with a computer system. Unlike human beings, a computer is free from monotony, tiredness, lack of concentration, etc., and hence can work for hours without making any errors.

VERSATILITY

Versatility is one of the most wonderful things about the computer. One moment it is preparing the results of a particular examination, the next moment it is busy preparing electricity bills. Briefly, a computer is capable of performing almost any task, provided that the task can be reduced to a series of logical steps.

POWER OF REMEMBERING

A computer can store and recall any amount of information because of its secondary storage capability. Every piece of information can be retained as long as desired by the user and can be recalled as and when required. Even after several years, the information recalled will be as accurate as on the day it was fed to the computer.

NO FEELINGS

Computers have no feelings and no instincts because they are machines. Based on our feelings, taste, knowledge, and experience, we often make judgments in our day-to-day life. But computers cannot make such judgments on their own. Their judgment is based on the instructions given to them in the form of programs written by us. They are only as good as the people who make and use them.

HISTORY OF COMPUTERS

ABACUS (5000 B.C)

It was the first calculating device. It was developed about 5000 B.C. in Babylonia. It is also known as the “Soroban”. It is interesting to note that it is still used today, for example in children’s toys.

NAPIER’S BONES (1600 A.D)

Another manual calculating device was Napier’s bones, developed by John Napier in 1600 A.D. Napier was a mathematician who also introduced logarithms. Napier’s bones were constructed from strips of bone.

PASCALINE (1642 A.D)

It was the first mechanical adding machine. It was invented by the 19-year-old Blaise Pascal in 1642 A.D. He was a French mathematician, and his father was a tax collector.

LEIBNIZ’S MACHINE (1694 A.D)

In 1694 A.D Baron Gottfried Wilhelm von Leibniz of Germany invented the first calculator that could multiply and divide. It could also extract square roots.

JACQUARD’S LOOM (1801 A.D)

Joseph Jacquard introduced the punched card in 1801 A.D. Jacquard, a French textile manufacturer, developed a system that used punched cards to control the weaving loom.

DIFFERENCE ENGINE AND ANALYTICAL ENGINE (1822 A.D)

Charles Babbage, a 19th century professor at Cambridge University, is considered to be the father of digital computers. Like Pascal and Leibniz, this Englishman was also a mathematician, and he wanted to develop a machine that could perform calculations. In 1822 he developed the “Difference Engine”. In 1823 he started to improve upon it; he actually wanted to develop a complete calculating machine, the Analytical Engine. But the engineering standards of the time were not advanced enough, and the machine was never completed.

MARK-1(1937 A.D)

It was the first electromechanical computer. It was designed by Howard A. Aiken of Harvard University in collaboration with IBM (International Business Machines). Its design was based on techniques already developed for punched card machines. The project was started in 1937 and completed in 1944. Mark-1 was approximately 55 feet long and 8 feet high.

GENERATION OF COMPUTERS

There are five generations of computers in total, which are based on the periods of development in this field.

  • First Generation (1942-1959)
  • Second Generation (1959-1965)
  • Third Generation (1965-1971)
  • Fourth Generation (1971-Present)
  • Fifth Generation (Future)

First Generation (1942-1959)

In the first generation of computers the vacuum tube was used. In 1946 J. Presper Eckert and John Mauchly developed the first vacuum tube computer, ENIAC (Electronic Numerical Integrator and Calculator). ENIAC had no notion of the stored program concept. In 1946 Dr. John von Neumann started work on EDVAC (Electronic Discrete Variable Automatic Computer), which used the concept of storing the program; its development was completed in 1952. Before EDVAC, in 1949, EDSAC (Electronic Delay Storage Automatic Computer) was developed by Maurice Wilkes, an Englishman. In 1951 Eckert and Mauchly developed UNIVAC (Universal Automatic Computer), which was the first commercial digital computer. The first business use of UNIVAC-1 was made by the General Electric Corporation in 1954. In the first generation of computers, punched cards were used for feeding in and getting out information. The use of the vacuum tube in computers is usually regarded as the beginning of the computer age.

Second Generation (1959-1965)

In 1948 the transistor was developed at Bell Labs, and it formed the basis for the second generation of computers. Through the use of transistors, second-generation computers were much faster, more reliable, and more versatile than first-generation computers. As in the first generation, punched cards and magnetic tape were used for input of data. In the second generation, high-level languages such as FORTRAN, COBOL, and BASIC were developed. Typical computers of this generation are the IBM 650 and the Burroughs 220.

Third Generation (1965-1970)

The use of ICs (Integrated Circuits) signified the beginning of the third generation of computers. Again, third-generation computers were smaller, more efficient, and more reliable than their predecessors. In 1958 the IC was developed by Jack Kilby and Robert Noyce. The earliest ICs, using a technology now called SSI (Small Scale Integration), could hold 10 or 20 transistors. By the late 1960s engineers had achieved MSI (Medium Scale Integration), which placed between 20 and 200 transistors on a chip. By 1969 as many as 1,000 transistors could be built on one chip of silicon.

Fourth Generation (1970-Present)

The significant distinction of the 4th generation of computers is the development of LSI (Large Scale Integration) and, later, VLSI (Very Large Scale Integration), which could place the equivalent of more than 5,000 transistors on a single chip. In the early 70s the first microprocessor, the Intel 4004, was developed. VLSI, the incorporation of several thousand transistors on a single chip, was followed by the creation of faster, more powerful microprocessors, such as the Intel 80386.

Fifth Generation (Future)

Some say that the creation and use of computers with AI (Artificial Intelligence) will be the next step. Although expert systems are already being used for specialized applications, true AI, or computers that can think, is still a concept for tomorrow.

BASIC COMPONENTS/ELEMENTS OF COMPUTER

The internal architectural design of computers differs from one system model to another, but the basic organization of the system remains the same.

The following are the main components of the computer system.

  • Input Unit.
  • Output Unit.
  • Storage Unit.
  • CPU (Central Processing Unit).
    • CU (Control Unit).
    • ALU (Arithmetic and Logic Unit).

INPUT UNIT

Data and instructions must enter the computer system before any computation can be performed on the supplied data. The input unit, which links the external environment with the computer system, performs this task. Data and instructions enter the input unit in forms that depend upon the particular device used. For example, data is entered from the keyboard in a manner similar to typing.

An input unit performs the following three functions.

  1. It accepts the list of instructions and data from the outside world.
  2. It converts these instructions and data into a computer-acceptable form.
  3. It supplies the converted instructions and data to the computer system for further processing.

OUTPUT UNIT

The job of the output unit is the reverse of that of an input unit. It supplies information and results of computations to the outside world. Thus it links the computer with the external environment.

In short, the following functions are performed by the output unit.

  1. It accepts the result from the computer.
  2. It converts these coded results into a human-acceptable form.
  3. It supplies the converted results to the outside world.

STORAGE UNIT

The storage unit (primary/main memory) of the computer system is designed to provide space for storing data entered through the input unit, space for intermediate results, and space for the final results.

In short, the specific functions of the storage unit are to store:

  1. All the data to be processed and the instruction required for the processing.
  2. Intermediate result of processing.
  3. Final results of processing before these results are released to an output device.

CPU (Central Processing Unit)

The main unit inside the computer is the CPU. This unit is responsible for all events inside the computer. It controls all the internal and external devices and performs the arithmetic and logic operations.

The CPU consists of several units. The two main units are:

v ALU (Arithmetic & Logic Unit)

v CU (Control Unit)

ALU (Arithmetic & Logic Unit)

The arithmetic and logic unit is the part where actual computations take place. It consists of circuits which perform arithmetic operations (e.g. addition, subtraction, multiplication, division) on the data received from memory, and which are capable of comparing numbers (less than, equal to, greater than).
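The mix of arithmetic and comparison operations an ALU performs can be illustrated with a short Python sketch (the variable names here are illustrative only, not part of any real hardware interface):

```python
a, b = 7, 3

# Arithmetic operations, as performed by the ALU's arithmetic circuits.
print(a + b, a - b, a * b, a // b)  # 10 4 21 2

# Comparison (logic) operations: less than, equal to, greater than.
print(a < b, a == b, a > b)  # False False True
```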

CU (Control Unit)

The control unit directs and controls the activities of the internal and external devices. It interprets the instructions fetched into the computer, determines what data, if any, are needed and where they are stored, decides where to store the results of the operations, and sends control signals to the devices involved in the execution of the instructions.

TYPES OF COMPUTERS ACCORDING TO WORKING

There are three basic types of computers according to working.

  • Analog Computer.
  • Digital Computer.
  • Hybrid Computer.

ANALOG COMPUTER

An analog computer is used to process analog data. Such data include temperature, pressure, speed, weight, voltage, depth, etc. These quantities are continuous, having an infinite variety of values.

It measures changes in some physical quantity, e.g. the speedometer of a car measures speed, a thermometer measures changes in temperature, and a weighing machine measures weight.

DIGITAL COMPUTER

A digital computer, as its name implies, works with digits that represent numerals, letters, or other special symbols.

A digital computer can be used to process numeric as well as non-numeric data. It can perform arithmetic operations (addition, subtraction, multiplication, division) and also logical operations. Digital computers are general purpose in use, and most of the computers in use today are digital computers. Common examples of digital computers are accounting machines and calculators.

HYBRID COMPUTER

A hybrid computer is a combination of digital and analog computers. It combines the best features of both types, i.e. it has the speed of the analog computer and the memory and accuracy of the digital computer. Hybrid computers are mainly used in specialized applications where both kinds of data need to be processed. For example, a petrol pump contains a processor that converts fuel flow measurements into quantity and price.

TYPES OF COMPUTER ACCORDING TO SPEED AND SIZE

There are four types of computers according to speed and size.

  • Super Computer.
  • Mainframe Computer.
  • Mini Computer.
  • Micro Computer.

SUPER COMPUTER

Super computers are the largest, fastest, and most expensive computers. The price of a super computer ranges from $5 million to $20 million. Their worldwide sales potential is only 30-50 computers per annum. These computers are 50,000 times faster than microcomputers and five times faster than large mainframe computers. About 60 miles of wiring is used in constructing a super computer. They can calculate 400 million numbers per second, whereas a mainframe can calculate 10 million. Their accuracy is up to 14 decimal places, and they can process 1 billion instructions in a second. Up to 1,000 individual PCs can be attached to a super computer. They are used in oil exploration, weather prediction, generation of film imagery, etc. Examples of super computers are the CRAY-1 and CYBER-205.

MAINFRAME COMPUTER

Mainframes are less expensive, less powerful, and slower than super computers, but still faster than the other types of computers. They can process 10 million numbers per second. The cost of a normal mainframe computer ranges from several hundred thousand dollars to many millions of dollars. IBM introduced families of mainframes (small, medium, large). Up to 1,000 workstations can be attached to a normal mainframe. The memory capacity is up to 100 MB, and the access time is 15 nanoseconds. These computers are mainly used for networking purposes. The application areas of mainframe computers are banks, hospitals, universities, etc. Examples of mainframe computers are the IBM-4381, ICL-2900, and NEC 610.

MINI COMPUTER

Mini computers are also known as midsize or low-end mainframe computers. They are less expensive and smaller than mainframe computers. They are designed for the computerization of data for research, industrial processes, and small business applications. They vary in size; the size of a mini computer prevents it from being portable, but it can be moved more easily than a mainframe. Time sharing, batch processing, and online processing are available on mini computers. The memory capacity of a mini computer is 32 MB, with access times as fast as 75 nanoseconds.

Examples of mini computers are the PRIME-9755, VAX-8650, and IBM System/36.

MICRO COMPUTER

A micro computer is the smallest and least expensive of all the computers. They generally fall into the price range of $100 to $10,000. The word micro refers mainly to the physical size and circuitry. It is a small computer, and originally it had rather limited capabilities compared to the large mainframe computers; now the microcomputer is more powerful than the early mainframes. The memory capacity of a micro computer is 32 MB, and the access time is 100 nanoseconds. Micro computers are easily accommodated on a desk and thus earned the name desktop. Examples of micro computer makers are IBM, Apple, Compaq, Radio Shack, Commodore, Atari, etc.

IBM compatibles: 286, 386, 486, 586, and now the Pentium IV as the latest.

The desktop, laptop, and handheld computers fall into this category.

TYPES OF COMPUTER ACCORDING TO PURPOSE

There are two types of computers according to purpose.

  • General Purpose Computer.
  • Special Purpose Computer.

GENERAL PURPOSE COMPUTER

Most computers in use today are general purpose computers, built for a great variety of processing jobs. Simply by using a general purpose computer with different software, various tasks can be performed, including writing and editing, manipulating facts in a database, tracking manufacturing inventory, making scientific calculations, and even controlling an organization’s security system or electricity consumption.

SPECIAL PURPOSE COMPUTER

A special purpose computer, as the name implies, is designed to perform a specific operation and usually satisfies the needs of a particular type of problem. Special purpose computers are also known as dedicated computers, because they are designed to perform a particular job. Such a computer would be useful in games, controlling traffic lights, weather prediction, satellite tracking, or programming a video cassette recorder. While a special purpose computer may have many of the same features found in a general purpose computer, its applicability to a particular problem is a function of its design rather than of a stored program.

COMPUTER CODES

People use alphabets in both upper and lower case, numbers, and many other characters. Computers deal with data converted into the simplest form that can be processed magnetically or electronically, that is, binary form.

To store and process data in binary form, a way of representing characters, numbers, and other symbols had to be developed. In other words, coding schemes had to be devised.

Main coding schemes are:

  • BCD code.
  • EBCDIC code.
  • ASCII-7 & ASCII-8.

BCD (Binary Coded Decimal) Code

The binary coded decimal (BCD) code is one of the early memory codes. It is based on the idea of converting each digit of a decimal number into its binary equivalent. Converting 49 into BCD produces the following result.

49 = 0100 1001 (0100 represents 4, and 1001 represents 9)

When 4 bits are used, altogether 16 combinations are possible.

Instead of using 4 bits per character, designers commonly use 6 bits to represent a character in BCD. With 6 bits it is possible to represent 64 different characters. The octal number system is used as a shortcut notation for BCD codes.
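The digit-by-digit conversion described above can be sketched in Python; `decimal_to_bcd` is a hypothetical helper written for this illustration, not a standard library function:

```python
def decimal_to_bcd(number):
    """Encode each decimal digit of `number` as its own 4-bit binary group."""
    return " ".join(format(int(digit), "04b") for digit in str(number))

# Reproduces the worked example: 49 = 0100 1001.
print(decimal_to_bcd(49))  # 0100 1001
```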

EBCDIC (Extended Binary Coded Decimal Interchange Code)

The major problem with the BCD code is that only 64 different characters can be represented. This is not sufficient to provide the decimal numbers (10), lowercase letters (26), capital letters (26), and a fairly large number of other special characters (28+).

Hence the BCD code was extended from a 6-bit code to an 8-bit code. The resulting code is called EBCDIC, in which it is possible to represent 256 different characters. The hexadecimal number system is used as a shortcut notation for EBCDIC codes. IBM mainframe computers and other systems use EBCDIC.
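Why hexadecimal works as a shortcut for an 8-bit code can be shown in one line of Python: each 4-bit half of the byte maps to a single hex digit (the 11110000 value used here is the EBCDIC code for the character '0'):

```python
ebcdic_zero = 0b11110000  # EBCDIC code for the character '0'

# The two 4-bit halves, 1111 and 0000, become the hex digits F and 0.
print(format(ebcdic_zero, "02X"))  # F0
```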

ASCII-7/8 (American Standard Code for Information Interchange)

Another computer code that is very widely used is the American Standard Code for Information Interchange. ASCII has been adopted by several American computer manufacturers as their computers’ internal code.

ASCII is of two types: ASCII-7 and ASCII-8.

ASCII-7 is a 7-bit code that allows 128 different characters.

ASCII-8 is an extended version of ASCII-7. It is an 8-bit code that allows 256 different characters rather than 128. The hexadecimal number system is used as a shortcut for ASCII-7 and ASCII-8.
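Python's built-in `ord` and `format` can be used to inspect these codes; this sketch shows the 7-bit and 8-bit ASCII patterns for the letter A:

```python
code = ord("A")  # the numeric ASCII code for 'A' is 65

print(format(code, "07b"))  # 7-bit ASCII: 1000001
print(format(code, "08b"))  # 8-bit ASCII: 01000001

# A 7-bit code allows 2**7 = 128 characters; 8 bits allow 2**8 = 256.
print(2 ** 7, 2 ** 8)  # 128 256
```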

The characters with their EBCDIC, ASCII (7-bit and 8-bit), and BCD codes are shown in the following table.

Character  EBCDIC    7-bit ASCII  8-bit ASCII  BCD
0          11110000  0110000      00110000     0000
1          11110001  0110001      00110001     0001
2          11110010  0110010      00110010     0010
3          11110011  0110011      00110011     0011
4          11110100  0110100      00110100     0100
5          11110101  0110101      00110101     0101
6          11110110  0110110      00110110     0110
7          11110111  0110111      00110111     0111
8          11111000  0111000      00111000     1000
9          11111001  0111001      00111001     1001
A          11000001  1000001      01000001     1010
B          11000010  1000010      01000010     1011
C          11000011  1000011      01000011     1100
D          11000100  1000100      01000100     1101
E          11000101  1000101      01000101     1110
F          11000110  1000110      01000110     1111
G          11000111  1000111      01000111
H          11001000  1001000      01001000
I          11001001  1001001      01001001
J          11010001  1001010      01001010
K          11010010  1001011      01001011
L          11010011  1001100      01001100
M          11010100  1001101      01001101
N          11010101  1001110      01001110
O          11010110  1001111      01001111
P          11010111  1010000      01010000
Q          11011000  1010001      01010001
R          11011001  1010010      01010010
S          11100010  1010011      01010011
T          11100011  1010100      01010100
U          11100100  1010101      01010101
V          11100101  1010110      01010110
W          11100110  1010111      01010111
X          11100111  1011000      01011000
Y          11101000  1011001      01011001
Z          11101001  1011010      01011010
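The ASCII columns of the table above can be regenerated programmatically as a sanity check; a small Python sketch:

```python
# Rebuild the 7-bit and 8-bit ASCII columns for the first few table rows.
for ch in "012":
    print(ch, format(ord(ch), "07b"), format(ord(ch), "08b"))
# 0 0110000 00110000
# 1 0110001 00110001
# 2 0110010 00110010
```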

INPUT DEVICES

Input is the process of entering and translating incoming data into machine-readable form so that it can be processed by a computer. An input device is a device through which data are entered and transformed into machine-readable form.

Different types of input devices are used.

  • Keyboard
  • Mouse
  • Trackball.
  • Joystick.
  • Touch Screen.
  • Track Point
  • Touch Pad
  • Light Pen.
  • Scanner.
  • BCR
  • OCR.
  • OMR
  • MICR.
  • Graphic Tablet or Digitizer
  • Voice Recognition Devices.
  • Machine Vision System.

KEYBOARD

The keyboard is an essential input device. It has typewriter-like keys along with some additional keys. The different types of keyboards used with computer systems have a certain number of features in common:

v Standard typewriter keys.

v Function Keys.

v Special Purpose Keys.

v Cursor movement Keys.

v Numeric Keys.

MOUSE

A mouse is a handheld device connected to the computer by a small cable, and it actually looks a little bit like a mouse. The mouse, which has a ball on its underside, is rolled on a flat surface. The rolling movement causes a corresponding movement of the cursor on the screen. Pressing various keyboard keys can also move the cursor.

TRACK BALL

The trackball is like an upside-down mouse. It has a ball on the top, and one can roll the ball directly with the hand. It is normally used with laptops.

JOYSTICK

A joystick is an input device that enables one to move the cursor, a page, an object, or a picture from one point to another on the screen. A joystick uses a lever to control the position of the cursor. It performs the same function as the arrow keys on the keyboard, but it is fast and gives eight directions of movement as compared to four.

TOUCH SCREEN

The touch screen registers input when a finger or another object comes in contact with the screen. Two touch screen techniques involve infrared beams and ultrasonic acoustic waves.

Touch screens have long been used in military applications. Today, because they have become less expensive, touch screens are found in many applications.

TRACK POINT

Some portable computers provide a mouse substitute called a track point, a button that protrudes from the middle of the keyboard. With a track point, one imitates mouse movements by pushing the button from side to side.

TOUCH PAD


Many laptop computers use a touch pad in front of the keyboard. One can move a finger on the pad to move the cursor on the screen. To click, one can tap the pad or use the buttons in front of the pad.

LIGHT PEN

The light pen looks like an ordinary pen, but its tip is a light-sensitive detector. The light pen uses a light-sensitive photoelectric cell to signal screen positions to the computer.

Light pens are frequently used by graphic designers, illustrators, and drafting engineers.

SCANNER

Scanners are input devices. A scanner works very much like a photocopier, but a scanner digitizes the information into a computer, not onto another piece of paper. In a process called imaging, a scanner converts a drawing, a picture, or any document into computer-recognizable form.

BCR (Bar Code Reader)

Data coded in the form of light and dark lines or bars are known as bar codes. Bar codes are used particularly by the retail trade for labeling goods.

A BCR is a device used for reading bar-coded data. Bar code reading is performed by a laser beam scanner, which is linked to the computer.

OCR (Optical Character Recognition)

In the 1950s a number of OCR input devices were developed. OCR is a device that utilizes light beams to read alphanumeric characters. The printed characters are examined by passing them under a strong light and a lens system, which differentiates the non-inked areas from the inked areas; a logical system then attempts to determine which of the possible characters is being examined.

OMR (Optical Mark Recognition)

OMR employs mark sensing, one of the simplest forms of optical recognition, to scan and translate a series of pen or pencil marks into computer-readable form based on their location.

The optical mark technology is widely used for scoring examinations and inputting raw data recorded on question papers.

MICR (Magnetic Ink Character Recognition)

MICR is the interpretation by a computer of a line of characters written in a special ink. Humans can read these characters as well. Magnetic ink character recognition devices were developed to assist the banking industry in processing huge volumes of cheques.

Graphic Tablet or Digitizer

It is a very sophisticated device used in the production of pictorial images. Digitizers are used in design and engineering businesses, such as those that develop aircraft or computer chips.

VOICE RECOGNITION DEVICES

Voice recognition is one of the newest, most complex input techniques used to interact with a computer. It requires a microphone and special voice recognition software. These devices can take the place of keyboards for text entry, and in addition some operating systems enable users to give vocal commands.

MACHINE VISION SYSTEM

The simulation of human senses, especially vision, is extremely complex. Computers need cameras for their eyesight. A camera digitizes the image of an entire object and then stores the image in a computer database. When a new image is seen, the system compares the digitized image with those in its database and identifies the image by matching its structure.
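The comparison step can be sketched as a naive nearest-match lookup over stored digitized images; this is a toy illustration with made-up data, not a real vision system:

```python
def hamming_distance(a, b):
    """Count the positions where two equally sized digitized images differ."""
    return sum(x != y for x, y in zip(a, b))

def identify(new_image, database):
    """Return the name of the stored image closest to `new_image`."""
    return min(database, key=lambda name: hamming_distance(new_image, database[name]))

# Toy 'images' stored as bit tuples (hypothetical database entries).
db = {"bolt": (1, 0, 1, 1), "nut": (0, 1, 1, 0)}
print(identify((1, 0, 1, 0), db))  # bolt
```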

OUTPUT DEVICES

Output is the process of translating data from machine-readable form into a form understandable to humans. An output device is a device that allows a computer to communicate information to humans by accepting data from the computer and transforming it into a usable form.

The output that is obtained from a computer system can be categorized into two main forms.

  • Soft Copy.
  • Hard Copy.

SOFT COPY

Soft copy generally refers to output displayed on a computer screen. It is a temporary form of output and is lost when the computer is turned off. However, if the data needed to create the soft copy have been saved on disk or tape, the soft copy can be reproduced on the screen at any time.

SOFT COPY DEVICES

The monitor is the main soft copy device.

MONITOR

A monitor is a television-like device used to display data or information. Monitors allow users to view the results of processing. There are two main types of monitors.

  • CRT (Cathode Ray Tube).
  • LCD (Liquid Crystal Display).

CRT (Cathode Ray Tube)

On this type of screen, an image is produced by an electron beam that moves across a phosphor-coated screen. The most common type of CRT has a display screen of 25 lines of 80 characters each; other sizes are also available.

The CRT’s screen display is made up of small picture elements, called pixels. The smaller the pixels, the better the image clarity.

LCD (Liquid Crystal Display)

The most common type of flat-panel display is the liquid-crystal display (LCD). An LCD uses a clear liquid chemical trapped in tiny pockets; each pocket of liquid is covered, both front and back, by very thin wires. When a small amount of current is applied to both wires, a chemical reaction turns the chemical a dark color, thereby blocking light.

The advantages of LCD monitors are low power consumption, low cost, and small size.

HARD COPY

Hard copy refers to information that has been printed on paper. It can be read immediately or stored and read later.

HARD COPY DEVICES

Among a wide variety of hard copy output devices printers and plotters are used the most.

PRINTER

There are two main types of printers.

  • Impact Printer.
  • Non-impact Printer.

IMPACT PRINTERS

An impact printer makes contact with the paper. It usually forms the print image by pressing an inked ribbon against the paper using a hammer or pins.

The following are some impact printers.

  • Dot Matrix Printer.
  • Daisy Wheel Printer.
  • Line Printer.

DOT MATRIX PRINTER

A dot-matrix printer uses print heads containing from 9 to 24 pins. These pins produce patterns of dots on the paper to form the individual characters. A 24-pin dot-matrix printer produces more dots than a 9-pin one, which results in much crisper, clearer characters. The general rule is: the more pins, the crisper the letters.

Dot-matrix printers are inexpensive and typically print at speeds of 100-600 characters per second.

DAISY WHEEL PRINTERS

In order to get the quality of type found on a typewriter, a daisy wheel impact printer can be used. The daisy wheel printer is so called because the print mechanism looks like a daisy; at the end of each “petal” is a fully formed character, which produces solid-line print. A hammer strikes a “petal” containing a character against the ribbon, and the character prints on the paper. Its speed is typically 25-55 characters per second.

LINE PRINTER

In businesses where enormous amounts of material are printed, users need line-at-a-time printers. Line printers, or line-at-a-time printers, use special mechanisms that can print a whole line at once; they can print at speeds of 1,200-1,600 lines per minute. Drum, chain, and band printers are all line-at-a-time printers.

NON-IMPACT PRINTER

Non-impact printers do not use a striking device to produce characters on the paper, and because these printers do not hammer against the paper, they are much quieter.

The following are the main types of non-impact printers.

  • Ink-jet Printers.
  • Thermal Printers.
  • Laser Printers.

INK-JET PRINTERS

Ink-jet printers work in the same fashion as dot-matrix printers in that they form images or characters with little dots. However, the dots are formed by tiny droplets of ink. Ink-jet printers form characters on paper by spraying ink from tiny nozzles at a speed of approximately 250 characters per second. Various colors of ink can also be used.

THERMAL PRINTER

Thermal printers use heat to produce an image on special paper. The print head is designed to heat the surface of chemically treated paper so that a dot is produced based on the reaction of the chemical to heat. Thermal printers are expensive, and they require special, expensive paper.

LASER PRINTER

A laser printer works like a photocopy machine. It produces images on paper by directing a laser beam at a mirror, which bounces the beam onto a drum.

As the paper rolls past the drum, the toner is transferred to the paper, printing a letter or other graphic on the paper.

Laser printers use buffers that store an entire page at a time. When a whole page is loaded, it is printed. The speed of laser printers ranges from 8 to 437 pages per minute, assuming 48 lines per page.

PLOTTER

A plotter is an output device designed to produce high-quality graphics in a variety of colors. Plotters are useful to engineers, artists, designers, and architects for producing graphics, especially when the output is larger than one page.

There are two basic types of plotters: those that use pens and those that don’t. Drum and flatbed plotters both use pens; electrostatic plotters do not.

MEMORY/STORAGE

Memory, or storage, hardware provides the capability to store data and program instructions. There are two types of storage:

  • Primary Storage
  • Secondary Storage

A) Primary Storage

Primary storage refers to the internal storage of the computer, where programs and data are stored. Primary storage, or primary memory, provides temporary storage during program execution. Because primary storage is located inside the computer and is linked directly to the other components of the CPU, access to the data is very fast.

The following are different memory storage devices.

  • RAM (Random Access Memory)
  • ROM (Read Only Memory)
  • PROM (Programmable Read Only Memory)
  • EPROM (Erasable Programmable Read Only Memory)
  • EEPROM (Electrically Erasable Programmable Read Only Memory)
  • Cache Memory

1) RAM (Random Access Memory)

RAM is the part of primary storage where data and program instructions are held temporarily while being manipulated and executed. This type of memory allows the user to enter data into memory and then retrieve it. It is referred to as RAM because any location in it can be randomly selected and used directly.

2) ROM (Read Only Memory)

As the name implies, the contents of ROM can only be read; data cannot be written into it. ROM may contain information about how to start the computer, or even instructions for an entire operating system. The contents of ROM are unchangeable and permanent: they cannot be altered, and they are not lost when the electric current is turned off.

3) PROM (Programmable Read Only Memory)

Several types of ROM can be programmed according to the user’s specification. PROM allows a chip to be programmed by the user once; then it cannot be altered further.

4) EPROM (Erasable Programmable Read Only Memory)

An EPROM chip has the features of a PROM, but it also has a transparent window. By removing the chip and exposing the window to ultraviolet light, the contents are erased, and the chip can be reprogrammed for a further application.

5) EEPROM (Electrically Erasable Programmable Read Only Memory)

This memory chip can be erased and reprogrammed electrically, so there is no need to remove it from the circuit as with an EPROM. EEPROMs are more costly than other ROM chips.

6) Cache Memory

A special, very high-speed memory is sometimes used to increase the speed of processing by making current programs and data available to the CPU at a rapid rate.

This type of memory is also called a high-speed buffer, or cache memory. It is used to store segments of programs currently being executed and/or data frequently needed in the present calculation. By making active programs and data available at a rapid rate, it is possible to increase the effective processing rate of the CPU.
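The idea can be sketched in a few lines of Python. This is only a software analogy of hardware cache behaviour; the store contents, cache size, and eviction rule are invented for illustration:

```python
# A simplified software model of a cache: a small, fast store that is
# consulted before the larger, slower main store.
main_store = {addr: addr * 2 for addr in range(1000)}  # pretend "slow" memory
cache = {}                                             # small "fast" memory
CACHE_SIZE = 4

def read(addr):
    if addr in cache:                # cache hit: fast path
        return cache[addr]
    value = main_store[addr]         # cache miss: go to slow memory
    if len(cache) >= CACHE_SIZE:     # evict the oldest entry when full
        cache.pop(next(iter(cache)))
    cache[addr] = value              # keep the value for next time
    return value

print(read(7))   # first access: a miss, fetched from the main store
print(read(7))   # second access: a hit, served from the cache
```

The first read of an address is slow (a miss); repeated reads of the same address are served from the cache, which is exactly why keeping active data close to the CPU raises its effective speed.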

b) Secondary Memory/Storage Devices

Secondary storage or auxiliary storage is any storage device designed to retain data and instructions in a more permanent form. Secondary storage is non-volatile, meaning that the data and instructions remain unchanged when the computer is turned off.

There are three types of the auxiliary storage.

  • Magnetic Tape.
  • Magnetic Disk.
  • Optical Technology.

1) MAGNETIC TAPE

Magnetic tape was introduced in 1957. It is a magnetizable plastic tape, typically ½ or ¼ inch wide, produced in a variety of lengths ranging from 200 to 3,600 feet. A modern tape can store the equivalent of more than 2.25 million punched cards.

2) MAGNETIC DISK

A magnetic disk is a metallic platter on which data can be stored. Data files on the disk can be read sequentially or directly. The advantages of the magnetic disk are the ability to access stored data directly and a larger storage capacity.

Two main types of the magnetic disk are:

  • Floppy Disk.
  • Hard Disk.

I) FLOPPY DISK

One of the most commonly used storage media is the floppy disk, on which data can be stored. They are often called “floppy” because they are made of flexible material. Before use, a floppy disk must be formatted. To store and retrieve data from a diskette, one must place it into a floppy disk drive. The capacity of a standard floppy disk is 1.44 MB.

II) HARD DISK

It is astonishing to recall that when IBM first invented the hard disk in 1956, its capacity was 5 MB. Some 25 years later, Seagate Technology introduced a hard disk that could store up to 40 MB. It is hard to believe that in 1980 the capacity of a hard disk was 100 MB, but nowadays hard disk capacities run to tens and hundreds of GBs.

Like floppy disks, hard disks must be formatted before use. They can also be divided into partitions, each of which can be treated as though it were a separate disk.

3) OPTICAL TECHNOLOGY

Optical technology involves the use of laser beams. Following are some kind of the optical disks.

  • CD-ROM (Compact Disk, Read Only Memory)
  • CD-R (Compact Disk Recordable)
  • WORM (Write Once, Read Many)
  • Erasable optical discs
  • CD-RW (Compact Disk-Rewritable)
  • DVD (Digital Video Disk)

I) CD-ROM (Compact Disk, Read Only Memory)

The most popular and least expensive type of optical disc is the CD-ROM. As the name indicates, these discs come pre-recorded and cannot be altered. CDs can store up to 650 MB of data. The storage density of CD-ROM is enormous, but the cost is very low.

II) CD-R (Compact Disk-Recordable)

A CD-R drive can read CD-ROM discs and can also write to special CD-R discs. CD-R is a write-once technology. An advantage of CD-R discs is that they are inexpensive.

III) WORM (Write Once, Read Many)

WORM storage emerged in the late 1980s and was popular with large institutions for archiving high-volume, sensitive data. When data is written to a WORM drive, physical marks are made on the media surface by a low-powered laser; since these marks are permanent, they cannot be erased.

IV) ERASABLE OPTICAL DISKS

Erasable optical drives are an alternative to large hard disks. They store about 30% more data than a standard hard disk pack and 1,400 to 2,800 times as much as diskettes. In contrast to CD-ROM and WORM disks, erasable disks can be changed and erased.

V) CD-RW (Compact Disk Rewritable)

CD-RW is a recordable CD format that provides full read/write capability. A CD-RW disc can store approximately 650 MB of data, and a CD-RW drive can also be used as a standard CD-ROM reader.

VI) DVD (Digital Video Disk)

The newest optical disk format, DVD, is capable of storing more than 17 GB and transferring data at a higher speed (12 MB per second). Digital Video Disks are designed to work with a video player and a TV. For computers, a different format called DVD-ROM is used. DVD-ROMs are designed to work with a special drive (a DVD-ROM drive) and a computer.

LANGUAGE GENERATIONS

A programming language is a set of symbols that instruct the computer hardware to perform a specific task. Typically a language consists of a vocabulary and a set of rules that a programmer must learn. The computer languages can be divided into the following five generations.

  • First Generation.
  • Second Generation.
  • Third Generation.
  • Fourth Generation.
  • Fifth Generation.

1. First Generation

Machine language is the only computer language that the computer can understand directly, without translation. It is a language made up entirely of 1s and 0s.

In the first generation of computers, the programmer had to use machine language because no other option was available. Machine language programs have the advantage of very fast execution speeds and efficient use of memory, but machine language is very difficult to program in.

2. Second Generation

The first step in making software development easier was the development of assembly language. Assembly language uses mnemonic operation codes and symbolic addresses in place of 1s and 0s to represent operations.

Before execution, assembly language is translated into machine language with the help of an assembler.
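The idea can be sketched with a toy assembler in Python. The mnemonics and the four-bit opcodes below are invented for illustration; they do not belong to any real instruction set:

```python
# Toy assembler: maps invented mnemonics to invented binary opcodes.
OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011"}

def assemble(line):
    mnemonic, operand = line.split()
    # opcode followed by the operand as an 8-bit binary address
    return OPCODES[mnemonic] + format(int(operand), "08b")

program = ["LOAD 5", "ADD 3", "STORE 9"]
machine_code = [assemble(line) for line in program]
print(machine_code)  # ['000100000101', '001000000011', '001100001001']
```

Each readable mnemonic line becomes a string of 1s and 0s, which is precisely the job an assembler performs for a real instruction set.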

3. Third Generation

Third-generation languages, also known as high-level languages, are very much like everyday text and mathematical formulas in appearance.

Most high-level languages are procedural languages, because their programs consist of statements that tell the computer not only what to do but also how to do it. Two types of translators are used for them: interpreters and compilers.

4. Fourth Generation

Fourth-generation languages are also known as very high-level languages. They are non-procedural languages, because the user or programmer specifies what the computer is supposed to do without having to specify how it is to be done.

Five basic types of language tools fall into the fourth generation.

I. Query Language.

II. Report Generator.

III. Application Generator.

IV. Decision Support Generator.

V. Microcomputer Application Software.

These language tools are usually used in conjunction with a database and its data dictionary.

5. Fifth Generation

Natural language represents the next step in the development of the programming languages. The text of a natural language statement very closely resembles human speech.

Natural languages already available for the microcomputer include Clout, Q & A, HAL, Visual Prolog etc.

SOFTWARE

The term software refers to a sequence of instructions given to the computer to perform a specific task. A computer cannot think about what to do with data and how to process it; all the defined steps for a particular job are known as software. In other words, we can say that the set of instructions that takes data from the input devices, manipulates and processes that data, and sends it to an output device is called software.

SYSTEM SOFTWARE

System software consists of programs which hold instructions related to the working of the software and hardware of the computer system. System software behaves like an interface and performs the responsibility of overall supervision of the input, processing, and output of data.

System software is divided into two main categories.

  • Operating system.
  • Translator.

Operating System

The operating system is the most important type of system software and can be defined as a set of programs which coordinate and control computer operations. It is a series of programs which provide communication between the user and the computer hardware.

Its main functions are:

  • Job Control.
  • Memory Management.
  • Keep Track of Computer Resources.
  • Produce Error Messages.
  • Multiprogramming.
  • Supervisor.

Translators

The translator is a program that converts source program into its equivalent object program. During the translation process, errors detected by the translator are listed. If there are no errors in the source program, the translation process will convert all instructions into the machine language equivalent.

Types of translators.

There are three types of translators.

  • Assemblers.
  • Interpreter.
  • Compilers.

Assembler

Assembly language instructions are converted to machine code with the help of an assembler. Assemblers are translator programs supplied by the computer manufacturers. First the whole program is typed in assembly language, and then the assembler converts all the instructions at once from assembly language to machine language.

Interpreter

An interpreter is another type of translator, used for translating high-level languages. It converts high-level language statements, such as BASIC statements, into machine code. It performs the following functions:

a) It translates the source program one instruction at a time.

b) No object code is saved for future use.

c) The next time an instruction is used, it must be interpreted and translated into machine code again.

d) It is useful for statement-by-statement fault finding.
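The one-instruction-at-a-time behaviour can be sketched with a toy interpreter in Python. The miniature assignment language it accepts is invented for illustration:

```python
# Toy interpreter: translates and executes one instruction at a time,
# keeping no object code for later use.
def interpret(program):
    variables = {}
    for line in program:                 # one instruction at a time
        target, expression = line.split("=")
        # evaluate the right-hand side using the variables known so far
        variables[target.strip()] = eval(expression, {}, variables)
    return variables

result = interpret(["x = 4", "y = x * 3", "z = x + y"])
print(result)  # {'x': 4, 'y': 12, 'z': 16}
```

Note that nothing is saved between runs: interpreting the same program again repeats the whole translation, which is exactly point (c) above.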

Compiler

A compiler is a complex program which translates a program written in a high-level language into an equivalent machine language program. The process of translation or conversion is called compilation. The whole program is translated completely by a compiler before the machine language program is carried out.

APPLICATION SOFTWARE

Application software, also known as packages, helps the user get his or her required output. These are software developed by experts in high- or low-level languages for non-experts. After getting initial training in a package, a layman can also handle his or her work very easily. Another purpose is to facilitate work in all fields of life.

There are two types of the application packages.

  • General Purpose Packages.
  • Special Purpose Packages.

General Purpose Packages

General purpose packages are developed by well-known software development companies. This software is developed by teams of experts to be utilized by non-experts after minimal training, e.g.

  • Word processors.
  • Spreadsheets.
  • Databases.
  • Graphics.
  • Games etc.

Special Purpose Packages (Customized Software)

These are software developed according to the needs of an organization. The program is designed for a specific organization, keeping in view its requirements, e.g.

  • Banking.
  • Account.
  • Library etc.

WORD PROCESSOR

A word processor is software that manipulates text, i.e. entering, viewing, storing, editing, rearranging, and printing text.

Uses of Word Processor

  • Business.
  • Homes.
  • Schools.

Components of the word processing system

The following are the main components of a word processing system.

  • A computer with sufficient RAM.
  • A video display screen.
  • Floppy drive and hard drive.
  • A printer.
  • Word processing software.

Reasons to use word processor

  • Text is prepared faster.
  • Typing errors are corrected easily.
  • Revisions are made easily.
  • Many copies of the same document can be produced without any difficulty.
  • Final copy is clean without any mistakes and cutting.

Text Editing Features

A word processor includes the following text editing features.

  • Insert text.
  • Delete text.
  • Undo operation.
  • Search and replace.
  • Block text.
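As a small illustration, the search and replace feature works much like the string operations built into Python; the sample document text below is invented:

```python
# Search and replace, the word-processor way, using built-in string methods.
document = "The computer is a fast device. The computer never gets tired."

position = document.find("computer")               # search: index of first match
revised = document.replace("computer", "machine")  # replace: all matches

print(position)  # 4
print(revised)
```

A real word processor adds a user interface, undo history, and confirmation for each match, but the underlying operation is the same scan-and-substitute shown here.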

Formatting Features

A word processor includes the following features.

  • Character size can be changed.
  • Font can be changed.
  • Margins, Page Break, Auto numbering and header and footer.

Special Features

A word processor includes the special features such as spell checking, thesaurus, mail merge, grammar checker, footnote, columns.

SPREADSHEETS

A spreadsheet is a grid of rows and columns. Before spreadsheet software, spreadsheets were prepared by hand, which often took days. If one number changed, many other values had to be recalculated.

A software package that permits users to quickly create, manipulate and analyze data, organized in the columns and rows is called electronic spreadsheet.

So a spreadsheet is a program designed to process information in the form of a table.

Manual versus Electronic Spreadsheets

  • Electronic spreadsheets can be larger than manual ones.
  • Electronic spreadsheets can perform calculations by themselves, using formulas or functions.
  • Cells in an electronic spreadsheet can contain formulas.
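A minimal sketch in Python of how a spreadsheet cell can hold either a value or a formula. Only "+" formulas are handled, and the cell names and values are invented for illustration:

```python
# A miniature electronic spreadsheet: cells hold either plain values or
# formulas, and formulas are recalculated on demand.
cells = {
    "A1": 10,
    "A2": 32,
    "A3": "=A1+A2",   # a cell containing a formula, as in a real spreadsheet
}

def value(ref):
    content = cells[ref]
    if isinstance(content, str) and content.startswith("="):
        left, right = content[1:].split("+")   # only "+" is supported here
        return value(left) + value(right)      # recalculate from the operands
    return content

print(value("A3"))  # 42
```

Changing A1 or A2 and asking for A3 again gives the new total automatically, which is the key advantage over a manual spreadsheet.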

Components of the Computer Based Spreadsheet

The following components are necessary.

  • A computer with sufficient RAM.
  • Spreadsheet software.
  • Secondary storage devices like floppy disk or hard disk for storage purpose.
  • A printer.

Spreadsheet Features

  • Functions.
  • Formulas.
  • Formatting (Heading, Number Symbols, printing).
  • Graphs.

DATABASES

A database is any collection of information stored in an organized way. A database file is made up of records. Within a record, information is organized into distinct fields.

A database management system (DBMS) is software that helps you organize data in a way that allows fast and easy access to the data.

Advantages of databases

Data Integrity.

The term data integrity refers to the validity of the data contained in a database. To avoid data integrity errors, database programs should use data validation procedures.

Data Independence.

The term data independence refers to the storage of data in such a way that it is not locked into use by a particular application.

Avoidance of Redundancy.

Data should be entered once and stored in one place, so that when it is updated, the change is reflected throughout the database.

Database concepts

  • Creating tables.
  • Creating queries.
  • Creating and printing reports.
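These concepts can be illustrated with Python's built-in sqlite3 module; the table and the sample records below are invented for illustration:

```python
# Create a table, insert records, and run a query with sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")        # a temporary in-memory database
conn.execute("CREATE TABLE students (name TEXT, grade INTEGER)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("Ali", 85), ("Sara", 92), ("Omar", 78)])

# A query: every student with a grade above 80, in name order
rows = conn.execute(
    "SELECT name FROM students WHERE grade > 80 ORDER BY name").fetchall()
print(rows)  # [('Ali',), ('Sara',)]
```

Each row of the table is a record, and each column (name, grade) is a field, matching the definitions given above.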

CAD/CAM

(COMPUTER AIDED DESIGN/COMPUTER AIDED MANUFACTURING)

CAD (COMPUTER AIDED DESIGN)

Computer aided design is the use of hardware and software components to provide a tool for designing and modeling anything and everything, from a small spoon to a space shuttle. So there are two main parts: the hardware (computer, plotter, scanner) and the software (e.g. AutoCAD).

Application of CAD

MCAD (mechanical CAD) is used by mechanical engineers.

AEC (architecture, engineering and construction) deals with the creation of buildings.

GIS (geographic information systems) use CAD to generate maps.

CAM (Computer aided manufacturing)

The use of computers to control factory machines in the manufacturing process is called computer aided manufacturing. The work of CAM actually starts where the work of CAD ends. When the design process (CAD) and the manufacturing process (CAM) are combined, they are referred to as CAD/CAM. Today a single computer can control welding machines and other tools, moving the product from machine to machine as each step in the manufacturing process is completed.

PROCESSING METHODS

Processing methods are techniques used to process different types of data. There are several processing methods; the following are commonly used.

  • Batch Processing.
  • Online Processing.
  • Real Time Processing.
  • Multiprogramming.
  • Multiprocessing.
  • Time Sharing.

1. Batch Processing

Batch processing is also known as sequential, serial, or stacked processing. In this processing method, the jobs of different users are collected in the order received. When a batch of jobs is complete, it is submitted for processing, and the jobs are processed in that same order. Processing a large volume of data in batches over a period of time results in lower processing costs per transaction.
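A minimal sketch of the batch idea in Python: jobs accumulate in arrival order and are processed together when the batch is run. The jobs themselves are invented for illustration:

```python
# Batch processing: jobs are collected first, then the whole batch
# is processed in one run, in receiving order.
batch = []

def submit(job):
    batch.append(job)          # jobs accumulate until the batch runs

def run_batch():
    results = [job() for job in batch]  # processed in receiving order
    batch.clear()                       # the batch is finished
    return results

submit(lambda: 2 + 2)
submit(lambda: 3 * 3)
print(run_batch())  # [4, 9]
```

Nothing is computed at submission time; all the work happens in one pass, which is what keeps the per-transaction cost low.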

2. Online Processing

Online processing is also known as direct-access processing. In this processing method, a job is processed at the same time when it is received. Thus online processing systems feature random and rapid input of the transactions and immediate and direct access to record contents as and when needed.

3. Real Time Processing

This mode of processing is designed to allow the computer to use data as they become available. To perform this function the equipment must have an online capability. In this method receiving and processing of data is performed simultaneously and there is no delay in processing of jobs.

4. Multiprogramming

Multiprogramming refers to the concurrent execution of two or more programs on one computer system. Several jobs can be run at once; to each observer it seems that the computer is performing only his or her job. However, the computer actually attends to each program in turn.

Programs with the highest priority are placed in a foreground partition; programs with lower priority are placed in a background partition.

5. Multiprocessing

When two or more processors share common memory and communicate with each other, it is called multiprocessing. Multiprocessing refers to the use of two or more CPUs within a mainframe.

In this method of processing, multiple CPUs are interconnected and share the same memory, forming one complete unit that performs parallel processing of several jobs. Two or more programs are loaded into main memory simultaneously, and instructions from these programs are executed by different CPUs at the same time.

6. Time Sharing

This method of processing shares the CPU's time among all the users on a scheduled basis. There is only one CPU serving more than one user.

The main idea behind time sharing is to give each user in a large group a chance to share the CPU to solve his or her individual problem.

The period in which a user gets the CPU is called a time slice and is typically on the order of 10-20 milliseconds.
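The time-slice idea can be sketched as a round-robin loop in Python; the user names and slice counts are invented for illustration:

```python
# Round-robin time sharing: each user's job gets one time slice per
# turn until it finishes.
from collections import deque

# each job is (name, number of slices it still needs)
jobs = deque([("user1", 2), ("user2", 3), ("user3", 1)])
order = []

while jobs:
    name, remaining = jobs.popleft()
    order.append(name)                      # this user gets one time slice
    if remaining > 1:
        jobs.append((name, remaining - 1))  # not finished: back of the queue

print(order)  # ['user1', 'user2', 'user3', 'user1', 'user2', 'user2']
```

Because the slices are short, each user perceives a dedicated machine even though a single CPU is rotating among all of them.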

PROGRAMMING

A program is a set of step-by-step instructions that direct the computer to do the tasks the user wants it to do and produce the results the user wants.

The following three reasons will help in understanding why we use programming.

· Programming helps one to understand the computer.

· Writing a few simple programs increases one’s confidence.

· Learning programming lets one quickly find out whether one likes programming and whether one has the analytical mind that programmers need.

THE PROGRAMMING PROCESS

The programming process consists of the following main steps.

1. Define the problem:

The task of defining the problem consists of identifying what the input is and what the output should be. Eventually one produces a written agreement that specifies the input, processing, and output required.

2. Planning the solution:

Two common ways of planning the solution to a problem are to draw a flowchart and to write pseudo code, or possibly both. A flowchart is a pictorial representation of a step-by-step solution to a problem. It consists of arrows representing the direction the program takes and boxes and other symbols representing actions. Pseudo code is an English-like, non-standard language that lets one state a solution with more precision than plain English but with less precision than a formal programming language requires.
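For example, pseudo code for computing the average of a list of numbers might read as follows (a sketch; pseudo code has no fixed standard):

```
Begin
    Set total to 0
    Set count to 0
    For each number in the list
        Add the number to total
        Add 1 to count
    End For
    Set average to total divided by count
    Display average
End
```

The same plan could equally be drawn as a flowchart, with the loop shown as an arrow returning to the decision box.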

3. Coding the program:

The next step in programming process is coding the program. A programming language is a set of rules that provide a way of instructing the computer what operations to perform. There are many programming languages.

Although programming languages operate grammatically, somewhat like the English language, one has to follow exactly the rules, the syntax, of the language one is using. The program must be coded in a form that the computer can understand.

4. Testing the program:

Some experts insist that a well-designed program can be written correctly the first time. However, the imperfections of the world are still with us, so most programmers get used to the idea that their newly written programs probably contain a few errors.

Desk-checking:

In desk-checking one simply sits down and mentally traces, or checks, the logic of the program to attempt to ensure that it is error-free and workable.

Translating:

A translator is a program that:

· Checks the syntax of the program to make sure that the programming language is used correctly.

· Translates the program into a form that the computer can understand.

Debugging:

Debugging means detecting, locating, and correcting bugs, usually by running the program. These bugs are logical errors, such as telling the computer to repeat an operation but not telling it how to stop repeating.

5. Documenting the Program:

Documentation is a written, detailed description of the programming cycle and specific facts about the program. Typical program documentation materials include the origin and nature of the problem, a brief narrative description of the program, logic tools such as flowcharts and pseudo code, data-record descriptions, program listings, and testing results. Comments in the program are also considered an essential part of the documentation.

FLOWCHARTS

A flowchart provides a graphical representation of the logic of a problem. It helps in detecting, locating, and removing errors. It is simply a method of helping the programmer organize the sequence of steps necessary to solve a problem, and it is also useful for understanding the logic of one’s program.

FLOWCHARTS SYMBOLS

A set of symbols is required for drawing a flowchart. These symbols have special meanings and have been standardized by ANSI (the American National Standards Institute). Some of the commonly used flowchart symbols are discussed below.

TERMINAL SYMBOL

All flowcharts must start and end with the terminal (oval) symbol. It is used to indicate the start and the end of the flowchart.

PROCESSING SYMBOL

The processing symbol is used to indicate any type of computation and data movement instruction.

INPUT/OUTPUT SYMBOL

The input/output symbol is used to indicate input and output operations.

DECISION SYMBOL

The decision symbol contains a decision that can be either true or false. This decision symbol asks a question to determine whether the answer is yes or no.

LOOP SYMBOL

The loop symbol is used in programming to show a repetitive sequence of activities.

ON-PAGE CONNECTOR

It is used to connect and continue parts of a flowchart from one place to another on the same page.

OFF-PAGE CONNECTOR

The off-page connector is used to connect parts of a flowchart that extends over several pages.

FLOWLINES

Flowchart symbols are connected together by means of arrows. These arrow symbols are known as flow lines.

As an example, a flowchart to compute the area of a circle would read the radius, compute area = πr², and output the area.
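The same steps (start, input the radius, compute the area, output the result, stop) can be written directly in Python; the sample radius is invented:

```python
# Flowchart steps: Start -> Input radius -> area = pi * r^2 -> Output -> Stop
import math

def circle_area(radius):
    return math.pi * radius ** 2   # processing step: area = pi * r squared

r = 7.0                            # input step (a sample value)
print(circle_area(r))              # output step
```

Each box of the flowchart corresponds to one line of the program, which is why flowcharts are a natural planning tool before coding.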

ARTIFICIAL INTELLIGENCE

Artificial intelligence is the ability of computers to perform human-like thinking and reasoning. AI is a field of study that explores how computers can be used for tasks that require the human characteristics of intelligence, imagination, and intuition. Researchers have been exploring the way people think in hopes of creating a computer that thinks like a person. AI has the complete attention of computer scientists; indeed, it is a main focus for the present and the future.

AI is a combination of computer science, physiology, and philosophy. The element that the fields of AI have in common is the creation of machines that can think.

AI has always been on the pioneering end of computer science. Advanced-level computer languages, as well as computer interfaces and word processors, owe their existence to research into AI. The theory and insights brought about by AI research will set the trend in the future of computing.

AREAS OF ARTIFICIAL INTELLIGENCE

Specifically, the areas of AI are:

  • Expert Systems
  • Advanced Robotics.
  • Natural Language Processing.
  • Voice Synthesis.
  • Voice Recognition.
  • Computer Vision.

EXPERT SYSTEMS:

An expert system is software, based on certain concepts of AI, which acts as a consultant or an expert in a specific field or discipline, to solve a problem or help make a decision. An expert system is able to do the work of a professional. Moreover, a computer system can be trained quickly, has virtually no operating cost, never forgets what it learns, never calls in sick, and never retires or goes on vacation. Beyond that, intelligent computers can consider a large amount of information that might not be considered by humans.

An expert system cannot entirely duplicate a human expert’s judgment or make the final decision, but it can offer opinions, suggest possible diagnoses, and suggest various solutions to a problem.

Because of their usefulness, expert systems are one of the first results of AI research to become a workable commercial product. Oil companies use expert systems to analyze geological data, while physicians use them to help diagnose and treat illnesses. Professional assistance and emergency management also take advantage of expert systems.

Components of Expert System:

An expert system consists of:

  • Knowledge Base.
  • Inference engine.
  • Shells.

Knowledge Base:

The knowledge base is the stored collection of facts on a particular subject and the hundreds or thousands of if-then rules by which the facts relate. The base includes maxims provided by a group of experts in that field, and thousands of facts and rules of thumb as well as judgments, intuitions, and experiences.

Inference Engine:

An inference engine is a reusable program that allows the computer to intelligently apply the facts and information to a particular problem. It retrieves and manipulates the facts and rules, deciding in which order to make connections, associations, and inferences. It also has the capability of explaining its reasoning and conclusions.
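A miniature sketch in Python of a knowledge base of if-then rules together with a simple forward-chaining inference engine; the medical facts and rules are invented for illustration:

```python
# Knowledge base: each rule is (set of conditions, conclusion).
rules = [
    ({"has_fever", "has_cough"}, "may_have_flu"),
    ({"may_have_flu"}, "recommend_rest"),
]

def infer(facts):
    """Forward chaining: apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule "fires"
                changed = True
    return facts

print(infer({"has_fever", "has_cough"}))
```

Starting from the two observed facts, the engine fires the first rule to conclude "may_have_flu", which in turn lets the second rule fire; real expert systems work with thousands of such rules and can report which ones fired to explain their reasoning.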

Shells:

Sometimes an expert system takes the form of an empty shell into which the user company provides the facts. A shell is an expert system with a structure for its knowledge base, but it does not contain the knowledge base itself. It includes an inference engine that works well with a variety of different fields.

ROBOTICS:

A robot is an automatic device that performs functions ordinarily ascribed to human beings, or that operates with what appears to be almost human intelligence. In the field of AI there are intelligent and unintelligent robots. Most robots are unintelligent; that is, they are programmed to do specific tasks, and they are incapable of showing initiative. An unintelligent robot cannot respond to a situation for which it has not been specifically programmed. Intelligence is provided either by a direct link to a computer or by on-board computers that reside inside the robot. In the future, reasoning ability will be incorporated into robots, thus improving their ability to behave “intelligently”.

The application areas of robots:

Robots in the factory:

Most robots are in factories, spray-painting, welding, and taking away jobs. There is a wide variety of sizes and shapes of robots, each designed with a particular use in mind. For example, with the help of a TV-camera eye, a robot can see the components it is meant to assemble. It is able to pick them up, rearrange them in the right order, or place them in the right position before assembling them.

Robot Vision:

Robot vision has already been successfully implemented in many manufacturing systems. To ‘see’, a computer measures the varying intensities of light of a shape. One of the main reasons for the importance of vision is that production-line robots must be able to discriminate among parts.

Field Robots:

There are some dangerous places where human beings would not want to be, e.g. inside a nuclear power plant, at the bottom of the sea, on the floor of a volcano, or in the middle of a chemical spill. But robots readily go to all these places, and they go there to do dirty and dangerous jobs.

Personal Robots:

Another area of interest is personal robots, familiar to us from science fiction. Existing personal robots exhibit relatively limited abilities, and whether sophisticated home robots can be made cost-effective is debatable. Software will allow a robot to bring its owner something to drink.

NATURAL LANGUAGE PROCESSING:

Natural language processing is the ability of a computer to understand and translate natural, everyday language. Natural languages are associated with AI because humans can make the best use of AI if they can communicate with computers in natural language.

Some natural language words are easy to understand because they represent a definable item: horse, chain, mountain, etc.

A key goal of AI study of natural language is to develop a computer system that can resolve ambiguities.

There are three elements in the natural language program.

  • One element is the parser, a program component that figures out how a sentence is put together with nouns, verbs and other fragments.
  • A second element is the semantic analyzer, which uses a built-in dictionary to interpret the meaning of words in the sentence.
  • The third element is the code generator that translates the user’s sentence into the machine language codes acceptable to the computer.

VOICE RECOGNITION:

Most voice recognition systems can recognize only a few words. The computer must be trained to recognize the user’s voice; it then accepts data or instructions spoken into the computer’s microphone.

There are other difficulties.

  • Many words sound like other words.
  • Different people pronounce the same word differently.
  • One word can have multiple meanings.
  • The tone in which something is said carries more meaning than the actual words do.

Any program that improves upon these limiting factors will have to be able to interpret all the characteristics that make up conversation.

COMPUTER VISION:

Scientists hope to develop computers that will process and interpret light waves just as the human brain does. Such a system would use scanning devices to sense and interpret graphics or text shapes. Such a computer could then read text in almost every written language.
