Thursday, December 30, 2010

INFORMATION TECHNOLOGY FUNDAMENTALS, MANAGEMENT AND MAINTENANCE



This is my journal about Information Technology Fundamentals, Management and Maintenance. It is meant to set down the facts about the course and how it came to be.


What is Information Technology?


Information technology (IT), as defined by the Information Technology Association of America (ITAA), is “the study, design, development, implementation, support or management of computer-based information systems, particularly software applications and computer hardware. …


en.wikipedia.org/wiki/Information_technology


IT Fundamentals by Rhenel Bernisca states that “this course covers the introduction to the concepts behind computers and information technology; capabilities and limitations of computers, applications of computers, history of computers, representation of data, machine and assembly languages, high level programming languages, computer system organizations, database systems, computer networks, operating systems, information systems, symbolic logic, current trends and issues.”


http://www.itsmartlab.com/fundamentals.aspx


Introduction to Computers


Definitions of a Computer:


-A programmable machine


-A computer is a machine that manipulates data according to a list of instructions.


-A computer can also be defined as an electronic machine that accepts input (data), processes it, and gives out results (information).


Computer Development


In the beginning, there were no computers or calculators. People used their fingers or toes to count, add, and subtract. Later, they used pebbles in piles, scratches on sticks or rocks, and eventually charcoal or chalk marks along the paths to their caves to record and store information. Counting and measuring devices like these built the steps toward the computer age.


Prehistoric people did not have much data to count, and the most natural way to count was with their fingers. When that method no longer met their needs, they used pebbles to perform simple arithmetic. (“Calculation” comes from the Latin word “calculus”, which means “pebble”.)


Some people say Stonehenge was a kind of computer. Prehistoric people based their calendars on the positions of the shadows of the stones. The stones served as the computer, sunlight as the input, and the calendar as the output.


Early Computing Devices



Abacus, an instrument used in performing arithmetic calculations. It consists essentially of a tablet or frame bearing parallel wires or grooves on which counters or beads are moved. Emerging about five thousand years ago in Asia Minor, it quickly became the favoured arithmetic tool of early merchants. Some people consider the abacus the ultimate ancestor of today’s computer. The abacus is still used in China, Japan, and Korea.




Slide Rule, invented by the Englishman William Oughtred. Prior to the invention of the hand-held calculator, the slide rule was a standard tool for engineers and scientists. Operating on the principle that all mathematical computations may be carried out on sets of sliding scales, the device looks much like a heavily calibrated ruler with a movable midsection. The midsection, called the sliding center scale, is engraved with fine lines to allow the user to align different logarithmic scales rapidly and efficiently. Multiplication, addition, subtraction, division, squaring, cubing, extracting roots, and more complicated calculations were computed regularly by adept users until well into the 1970s.



Napier’s Rods or Bones. In the early 17th century John Napier, a mathematician from Scotland, developed Napier’s bones (published in 1617). These rods were used to multiply large numbers: a set of eleven rods with numbers marked on them in such a way that, by simply placing the rods side by side, the products of large numbers can be read off. The sticks were called bones because they were made of bone or ivory. Napier’s bones represented a significant contribution to the development of computing devices.



Mechanical Computers:



Pascaline. The forerunner of the calculator was an adding machine invented by a Frenchman, Blaise Pascal, in 1642; it was called the Pascaline. It could add long columns of numbers. It consisted of cogged wheels, with each gear tooth representing a digit from 0 to 9. The numbers of each digit position were arranged on a wheel so that a single revolution of one wheel resulted in one-tenth of a revolution of the wheel to its left. It performed carries, or regrouping from one column to the next, by using a counter-gear. This device helped his father calculate tax revenues.




Leibniz’s Calculator. Gottfried Wilhelm von Leibniz, who with Isaac Newton independently invented calculus, built a calculating machine that could perform all four operations: addition, subtraction, multiplication, and division. It worked by a system of gears and dials and was used for computing scientific and mathematical tables. Von Leibniz was able to successfully introduce an automatic calculator into the business marketplace of his day. It was originally designed in 1673 and first built in 1694.


Jacquard’s Loom. In 1804, Joseph Marie Jacquard perfected the automated loom, a weaving machine that used punched cards to create designs in cloth. The invention marked the first machine to create something new, and its coded cards were the forerunner of punched cards.





Arithmometer. Charles Xavier Thomas of Colmar, France, was the first to manufacture mechanical calculating machines as an independent industry. Based on the work of Pascal and Leibniz, the Arithmometer was the first truly successful desktop calculator to be commercially sold and distributed. This mechanical calculator could do the four mathematical operations, and with its enhanced versatility it was widely used until the First World War.




Difference Engine. In 1822, Charles Babbage, an English mathematics professor, proposed a machine to solve differential equations, called the Difference Engine. Powered by steam and as large as a locomotive, it was to store programs, perform calculations, and print results automatically.


The Difference Engine was designed to aid the calculation of mathematical, celestial, and navigational tables in hopes of reducing the number of ships lost at sea. It unfortunately had many downfalls, including the fact that it was a very specialized machine that could practically perform only one kind of calculation. To perform a different calculation, the gears would have to be changed, making it very impractical.






Analytical Engine. Ten years later, Charles Babbage developed in England the first calculating machine that could be labeled a computer: the Analytical Engine. It could calculate and print logarithmic tables. It was designed as a general-purpose computer because it processed information on its own and used punched cards that could instruct the machine to repeat certain operations. Though it was never completed, it remained the prototype for later computer designs. The engine had two basic components: a storage unit, with a memory device consisting of groups of 50 counter wheels able to store 1,000 numbers of 50 digits each, and an arithmetic section, the mill, which performed the calculations.


The Analytical Engine of the mid-19th century is regarded as the design for the world’s first general-purpose digital computer. Because of this, Charles Babbage is now known as the Father of the Modern Computer.





Lady Ada Augusta Byron helped Babbage develop this machine. She is called the World’s First Computer Programmer.



Hollerith Tabulator. In 1890, Dr. Herman Hollerith invented a tabulating machine and card sorter for the U.S. Census Bureau’s census. It was the forerunner of the keypunch of the 1930s. Data were translated into a series of holes in a punched card to represent the digits and the letters of the alphabet, and the cards were then passed through a machine with a series of electrical contacts acting as on/off switches. The code on the punched cards and paper tapes of the telex machines still used today is called the Hollerith Code. In 1924, the company descended from his Tabulating Machine Company was renamed International Business Machines (IBM).



Mark I, the forerunner of the computers used today. Howard Aiken of Harvard University combined Babbage’s concept with the technology and efforts of his colleagues at Harvard. With IBM, the Automatic Sequence Controlled Calculator was finished in 1944; it was called the Mark I. It could perform the four mathematical operations in a specified sequence determined by the settings of its switches, then type its answer on a connected typewriter or punch it on cards after a few seconds. It contained more than 3,000 electromechanical relays, weighed five tons, and was used for 15 years. Grace Hopper, a programmer, coined the word “bug” after the Mark I had made the system stop.



Generation of Computers


The history of computer development is often described in terms of the different generations of computing devices. A generation refers to a state of improvement in the product development process; the term is also applied to each major advance in computer technology. With each new generation, the circuitry has gotten smaller and more advanced than in the previous generation. As a result of this miniaturization, speed, power, and computer memory have proportionally increased. New discoveries are constantly being made that affect the way we live, work, and play.


Each generation of computers is characterized by major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices. Read about each generation and the developments that led to the current devices that we use today.




The First Generation – Vacuum Tubes (1940 – 1956)


The computers of this generation used vacuum tubes for circuitry and magnetic drums for memory. A magnetic drum, also referred to simply as a drum, is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored; magnetic drums were once used as primary storage but have since been relegated to auxiliary storage. These computers were huge, occupying entire rooms, and expensive to operate because they used a great deal of electricity. They generated a lot of heat, which often caused them to malfunction. They relied on machine language and could work on only one problem at a time. Input was based on punched cards and paper tape, while output was displayed on printouts.



¬ In 1941, the German engineer Konrad Zuse built the Z3 computer, used to design airplanes and missiles.


¬ In 1942, the magnetic tape system came into use as a storage unit, giving users fast access to data.


¬ In 1943, the British developed a secret code-breaking computer called Colossus to decode German messages.


¬ In 1944, Howard Aiken produced an all-electronic calculator to create ballistic charts for the U.S. Navy.




ENIAC – Electronic Numerical Integrator and Computer. The first electronic digital computer.


It was completed in 1946 at the Moore School of Electrical Engineering of the University of Pennsylvania. It had no moving parts, was programmable, and had the capability to store problem calculations. It was designed and constructed by John Presper Eckert, Jr. and John Mauchly. It differed from the other electro-mechanical computing machines of its time in that it used vacuum tubes (about 18,000 of them).


The ENIAC could add in 0.2 of a millisecond, or about 5,000 computations per second, and could calculate a weapon’s trajectory angle in 20 seconds. It replaced the human computers the army had hired to do such calculations by hand. However, ENIAC had the disadvantages of size and processing ability: it occupied 1,500 square feet of floor space.


In 1945, the Electronic Discrete Variable Automatic Computer (EDVAC) used the stored-program concept of Dr. John von Neumann: a program controlling the steps of a calculation resides in the computer’s memory along with the data being used in the calculation.



EDSAC - Electronic Delay Storage Automatic Computer. The first electronic-stored program computer.


This computer was capable of storing a sequence of instructions; it ran the first stored computer program. It was built by Maurice Wilkes of Cambridge University in England in 1949.



UNIVAC – Universal Automatic Computer. The first commercial computer.


The UNIVAC was introduced in 1950 by Remington Rand and became the first commercially available computer. It was constructed with vacuum tubes; it was big and bulky and generated so much heat that it required an air-conditioned room. It could calculate at the rate of 10,000 additions per second. It received national attention in 1952 when it predicted Dwight D. Eisenhower’s victory in the presidential election.


In 1957, the International Business Machines Corporation (IBM) developed its own first-generation computer called IBM 704 which could perform 100,000 calculations per second.


The early 1950s also brought the development and acceptance of magnetic tape, a great technological advancement. This compact, portable medium permitted sequential storage of millions of characters of data and their rapid transfer to the computer; data could move up to 75 times faster than with other available methods. Magnetic tape storage operates much like a home tape recorder.



The Second Generation – Transistor Technology (1956 – 1963)


By 1948, the transistor had begun to replace the large vacuum tubes in televisions, radios, and computers. A transistor is a device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including computers; today's latest microprocessors contain tens of millions of microscopic transistors. Transistors first worked in a computer in 1956 and led to second-generation computers that were smaller, faster, cheaper, more reliable, and more energy-efficient.


Second-generation computers still relied on punched cards for input and printouts for output. This generation saw the move from cryptic binary machine language to symbolic, or assembly, languages that let programmers specify instructions in words. High-level programming languages such as COBOL (Common Business Oriented Language) and FORTRAN (Formula Translator) also came into use.



The second-generation computers



¬ IBM 604 – was used in accounting machines and in the card-programmed calculator.


¬ STRETCH – produced by IBM


¬ LARC (Livermore Atomic Research Computer)


These computers were developed for atomic energy laboratories, which had to handle enormous amounts of data, and they were also used in business, universities, and government. They contained components familiar today, such as printers, tape storage, disk storage, memory, operating systems, and stored programs. Main memory capacity was improved by the use of tiny magnetic cores, and data storage improved with the introduction of disk files.


¬ IBM 1401 (the “Model T” of computing) – ran its program from the computer’s memory and could print invoices, help design products, and calculate paychecks.


¬ Honeywell 400 – was widely used by businesses, universities, and government organizations, and was cheaper.


¬ PDP-8 – was introduced by the Digital Equipment Corporation in 1965.





Third-Generation Technology – Integrated Circuits (1964 – 1971)



Jack Kilby, an engineer at Texas Instruments, developed the integrated circuit (IC) in 1958. It combined three electronic components on a small silicon disc made from non-metallic quartz. An IC is a complete electronic circuit on a single chip.


These third-generation computers introduced operating systems that allowed machines to run many different programs at once, with a central program that monitored and coordinated the computer’s memory. They were characterized by solid-state technology and integrated circuitry with extreme miniaturization, by increased multiprogramming, and by virtual storage memory (secondary storage such as disks and tapes).


The third-generation computers



* IBM 360 Series – used random access of data from computer files.


* IBM 370 Series – used silicon chips only eight hundredths of an inch square.


* The minicomputer was developed by the Digital Equipment Corporation.



The development of the chip preceded the development of the microcomputer. Microcomputers display data in color, retain data in disk files, and use voice synthesizers to “talk” with their users. Instead of punched cards and printouts, users interacted with them through keyboards and monitors.




The Fourth Generation – The Microprocessor (1971 – 1995)


The microprocessor is an extension of third-generation technology and is described as evolutionary. The Apple II was a personal computer that offered an easy-to-use keyboard and screen; the IBM PC was introduced later and became the standard for the microcomputer industry.


In 1971, the fourth-generation computers began to use the microprocessor, a general-purpose processor on a chip. Microprocessors can now be found almost everywhere: in digital watches, calculators, and personal computers.


Computers built after 1972 were based on LSI (Large Scale Integration) circuits, typically 500 or more components on a chip. By 1980, VLSI (Very Large Scale Integration) squeezed 100,000 components onto a chip, and Ultra Large Scale Integration (ULSI) later increased that number into the millions. This generation shrank the size and price of computers while increasing their power, efficiency, and reliability. These microcomputers came complete with user-friendly software packages that offered even nontechnical users an array of applications.



The Fifth Generation – Artificial Intelligence (1995 and Beyond)


The fifth generation of electronic computers has already started; it will contain computers that can learn, apply logic, and reason. Advances in the science of computer design and technology are coming together to create fifth-generation computers. Computers are now able to accept spoken instructions (voice recognition) and imitate human reasoning, and the ability to translate a foreign language is also possible. The goal of fifth-generation computing is to develop devices that respond to natural-language input and are capable of learning and self-organization. Gigahertz and terahertz chips are being manufactured. Devices such as cell phones, desktop computers, laptops, and palm computers will be able to communicate with each other on the move. Other developments, such as wireless networks, longer-lasting rechargeable laptop batteries, and nanotechnology, are also part of the fifth generation.



¬ Nanotechnology - the creation and use of materials or devices at extremely small scales.


¬ Fuzzy Logic – refers to methods that allow machines to think as humans do.


¬ Neural Network - a system of electrical circuits designed to perform in a similar way to the human nervous system, especially a computer system mimicking the human brain


¬ Natural Language – refers to systems capable of translating ordinary human commands into a language the computer can understand and respond to.


¬ Expert System – refers to programs that copy human experts’ decision-making and problem-solving thought processes.


¬ Robotics – refers to machines that are capable of moving and relating to objects like humans.


¬Communications technology, also called telecommunications technology, consists of electromagnetic devices and systems for communicating over long distances.


¬Online means using a computer or other information device, connected through a voice or data network, to access information and services from another computer or information device.


¬Cyberspace encompasses not only the online world and the Internet in particular but also the whole wired and wireless world of communications in general.


-The two most important aspects of cyberspace are the Internet and that part of it known as the World Wide Web.


The Internet is the mother of all networks, also known as the worldwide network; it connects smaller networks in different countries. The World Wide Web is the multimedia part of the Internet.


Machines/Appliances:


-A machine is a device that uses energy to perform a task.


Categories:


*Household Appliances


-Small Appliances


-Major Appliances


-Tools/Gadgets


*Information Appliances


*Computer Appliances


-Internet Appliances


-Network Appliances


*Data is a fact that comes in the form of a number, a picture, or a statement; it is the raw material for producing information.


*Information is data that has meaning within a context; it is data that has been processed.


Information Processing Cycle:


-Input


-Process


-Output


-Storage


-Communication


Hardware:


Components of Computer Hardware:


-Central Processing Unit is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions.


-Input Device is any peripheral (piece of computer hardware equipment) used to provide data and control signals to an information processing system (such as a computer).


-Output Device is any piece of computer hardware equipment used to communicate the results of data processing carried out by an information processing system (such as a computer) to the outside world.


-Primary Storage (or main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required.


-Secondary Storage (also known as external memory or auxiliary storage), differs from primary storage in that it is not directly accessible by the CPU.


-Communication Devices provide for the flow of data from external computer networks to the CPU, and from the CPU to computer networks.



Computer Time:


A millisecond is a thousandth (1/1,000) of a second.


A microsecond is equal to 1,000 nanoseconds, or 1/1,000 of a millisecond.


A nanosecond is equal to 1,000 picoseconds, or 1/1,000 of a microsecond.


A picosecond is equal to 1,000 femtoseconds, or 1/1,000 of a nanosecond.


Types of Computers (based on Processing Capabilities):


Supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation.



Mainframes (often colloquially referred to as "big iron") are powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.



Midrange computers, or midrange systems, are a class of computer systems that fall between mainframe computers and microcomputers. The class emerged in the 1960s, when such machines were more generally known as minicomputers.



A Microcomputer is a computer with a microprocessor as its central processing unit. Microcomputers are physically small compared to mainframes and minicomputers, and many of them (when equipped with a keyboard and screen for input and output) are also personal computers in the generic sense.



Mobile device (also known as a handheld device, handheld computer or simply handheld) is a pocket-sized computing device, typically having a display screen with touch input and/or a miniature keyboard. In the case of the personal digital assistant (PDA) the input and output are often combined into a touch-screen interface.




Computer Size:



Categories of the Number System:





Binary Arithmetic



For some important aspects of Internet engineering, most notably IP Addressing, an understanding of binary arithmetic is critical. Many strange-looking decimal numbers can only be understood by converting them (at least mentally) to binary.


All digital computers represent data as a collection of bits. A bit is the smallest possible unit of information. It can be in one of two states - off or on, 0 or 1. The meaning of the bit, which can represent almost anything, is unimportant at this point. The thing to remember is that all computer data - a text file on disk, a program in memory, a packet on a network - is ultimately a collection of bits.





If one bit has two different states, how many states do two bits have? The answer is four. Likewise, three bits have eight states. For example, if a computer display had eight colors available, and you wished to select one of these to draw a diagram in, three bits would be sufficient to represent this information. Each of the eight colors would be assigned to one of the three-bit combinations. Then, you could pick one of the colors by picking the right three-bit combination.





A common and convenient grouping of bits is the byte or octet, composed of eight bits. If two bits have four combinations, and three bits have eight combinations, how many combinations do eight bits have? If you don't want to write out all the possible byte patterns, just multiply eight twos together - one two for each bit. Two times two is four, so the number of combinations of two bits is four. Two times two times two is eight, so the number of combinations of three bits is eight. Do this eight times - or just compute two to the eighth power - and you discover that a byte has 256 possible states.
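Here is a quick Python check of these counts (a sketch of mine, not part of the original notes):

# Each added bit doubles the number of distinct states.
for n in (1, 2, 3, 8):
    print(n, "bits give", 2 ** n, "possible states")

It prints 2, 4, 8, and 256 states, matching the reasoning above.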





Obviously, if a byte has 256 possible states, its exact state can be represented by a number from 1 to 256. However, since zero is a very important number, a byte is more typically represented by a number from 0 to 255, with bit pattern 00000000 representing zero and bit pattern 11111111 representing 255. The numbers matching these two patterns, and everything in between, can be computed by assigning a weight to each bit, multiplying each bit's value (0 or 1) by its weight, and then adding the totals. For example, here's how 217 is represented as 11011001 in binary:
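Bit weights:  128   64   32   16    8    4    2    1
Bits of 217:    1    1    0    1    1    0    0    1
Products:     128  +64   +0  +16   +8   +0   +0   +1  =  217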





To convert a number from decimal to binary, begin at the leftmost bit position (weight 128). If the number is larger than or equal to the bit's weight, write a 1 in that bit position, subtract the weight from the number, and continue with the difference. If the number is less than the bit's weight, write a 0 in the bit position and continue without any subtraction. Here's an illustration of converting 141 to binary:
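141 >= 128 -> 1, remainder 13
 13 <   64 -> 0
 13 <   32 -> 0
 13 <   16 -> 0
 13 >=   8 -> 1, remainder 5
  5 >=   4 -> 1, remainder 1
  1 <    2 -> 0
  1 >=   1 -> 1, remainder 0

Result: 141 = 10001101 in binary.

The same procedure is easy to express as a minimal Python sketch (the function name to_binary is mine, not from the source):

def to_binary(n):
    # Walk the bit weights from left (128) down to right (1),
    # writing a 1 and subtracting whenever the weight fits.
    # Works for one byte, i.e. values 0 through 255.
    bits = ""
    for weight in (128, 64, 32, 16, 8, 4, 2, 1):
        if n >= weight:
            bits += "1"
            n -= weight
        else:
            bits += "0"
    return bits

print(to_binary(141))  # 10001101
print(to_binary(217))  # 11011001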





There is a simpler way to convert bytes back and forth between binary and decimal, akin to memorizing multiplication tables. The byte can be split into two four-bit halves, each half called a nibble.


Memorize the decimal values for the high nibble (they're just the multiples of 16); the low nibble is trivial. Every number between 0 and 255 is the sum of one of the high-nibble values and one of the low-nibble values. Write the high nibble next to the low nibble, and you have the byte value in binary. Conversely, an eight-bit binary byte can be split in half, each nibble converted to decimal, and the two decimal numbers added together.
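For example, splitting 11011001 into nibbles gives 1101 and 1001: the high nibble is worth 13 x 16 = 208, the low nibble is worth 9, and 208 + 9 = 217, the same value computed with the bit weights above.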





http://www.cotse.com/CIE/Topics/19.htm



Binary Addition & Subtraction:



Let's first take a look at decimal addition.


As an example we have 26 plus 36,



26



+36



--------



To add these two numbers, we first consider the "ones" column and calculate 6 plus 6, which results in 12. Since 12 is greater than 9 (remembering that base 10 operates with digits 0-9), we "carry" the 1 from the "ones" column to the "tens" column and leave the 2 in the "ones" column.

Considering the "tens" column, we calculate 1 + (2 + 3), which results in 6. Since 6 is less than 9, there is nothing to "carry" and we leave 6 in the "tens" column.


26


+36


--------


62


Binary addition

Binary addition works in the same way, except that only 0's and 1's can be used, instead of the whole spectrum of 0-9. This actually makes binary addition much simpler than decimal addition, as we only need to remember the following:


0 + 0 = 0


0 + 1 = 1


1 + 0 = 1


1 + 1 = 10


As an example of binary addition we have,


101


+101


-------


a) To add these two numbers, we first consider the "ones" column and calculate 1 + 1, which (in binary) results in 10. We "carry" the 1 to the "tens" column, and leave the 0 in the "ones" column.

b) Moving on to the "tens" column, we calculate 1 + (0 + 0), which gives 1. Nothing "carries" to the "hundreds" column, and we leave the 1 in the "tens" column.

c) Moving on to the "hundreds" column, we calculate 1 + 1, which gives 10. We "carry" the 1 to the "thousands" column, leaving the 0 in the "hundreds" column.


101


+101


------


1010


Another example of binary addition:


1011


+1011


-------


10110


Note that in the "tens" column, we have 1 + (1 + 1), where the first 1 is "carried" from the "ones" column. Recall that in binary,

1 + 1 + 1 = 10 + 1 = 11

Binary subtraction

Binary subtraction is simplified as well, as long as we remember how subtraction works in the base 2 number system. Let's first look at an easy example.


111


- 10


--------


101


Note that the difference is the same as if this were decimal subtraction. Also similar to decimal subtraction is the concept of "borrowing." Watch as "borrowing" occurs when a larger digit, say 8, is subtracted from a smaller digit, say 5, as shown below in decimal subtraction.


35


- 8


--------


27


Here a 10 is borrowed from the "tens" column for use in the "ones" column (making 15 - 8), leaving the "tens" column with only 2. The following examples show "borrowing" in binary subtraction.


   10        100       1010
  - 1       - 10      - 110
  ---       ----      -----
    1         10        100
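All of these sums and differences are easy to verify with a few lines of Python (a sketch of mine, not part of the original lecture): int(s, 2) parses a binary string, and format(n, "b") prints a number back in binary.

print(format(int("101", 2) + int("101", 2), "b"))    # 1010
print(format(int("1011", 2) + int("1011", 2), "b"))  # 10110
for a, b in [("10", "1"), ("100", "10"), ("1010", "110")]:
    # Subtract as ordinary integers, then display in binary.
    print(a, "-", b, "=", format(int(a, 2) - int(b, 2), "b"))
# 10 - 1 = 1
# 100 - 10 = 10
# 1010 - 110 = 100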


http://www.itsmartlab.com/nel%20lectures/itfundamentals/binaryadditionsubtraction.pdf




Octal Addition Table:

   | 0  1  2  3  4  5  6  7
---+------------------------
 0 | 0  1  2  3  4  5  6  7
 1 | 1  2  3  4  5  6  7 10
 2 | 2  3  4  5  6  7 10 11
 3 | 3  4  5  6  7 10 11 12
 4 | 4  5  6  7 10 11 12 13
 5 | 5  6  7 10 11 12 13 14
 6 | 6  7 10 11 12 13 14 15
 7 | 7 10 11 12 13 14 15 16
Binary Multiplication Table:

| 0 1
---+-----
0 | 0 0
1 | 0 1

Binary multiplication can be achieved in a similar fashion to multiplying decimal values, using the long multiplication method, i.e., by multiplying by each digit in turn and then adding the values together.

For example, let's do the following multiplication: 1011 x 111 (decimal 11 x 7):

      1011
    x  111
    ------
      1011        (1011 x 1)
     10110        (1011 x 1, shifted left one place)
    101100        (1011 x 1, shifted left two places)
   -------
   1001101

which gives us 1001101; converting this value into decimal gives 77.

So the full calculation in decimal is 11 x 7 = 77 (correct!)


Note: notice the pattern in the partial products. As you can see, multiplying a binary value by two can be achieved by shifting its bits one place to the left and adding a zero on the right.





Dividing binary numbers


Like multiplication, dividing binary values works the same way as long division in decimal.

For example, let's do the following division: 1001 ÷ 11 (decimal 9 ÷ 3),

which gives us 0011 (since 11 x 11 = 1001); converting this value into decimal gives 3.

So the full calculation in decimal is 9 ÷ 3 = 3 (correct!)


Note: dividing a binary value by two can also be achieved by shifting the bits one place to the right, with a zero entering on the left.
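Both shortcuts can be demonstrated with Python's shift operators (again a sketch of mine, using the same string conventions as above):

x = int("1011", 2)                     # 11 in decimal
print(format(x << 1, "b"))             # 10110: times 2, a zero enters on the right
print(format(x >> 1, "b"))             # 101:   divided by 2, the rightmost bit drops off
print(format(int("1011", 2) * int("111", 2), "b"))   # 1001101 (11 x 7 = 77)
print(format(int("1001", 2) // int("11", 2), "b"))   # 11 (9 ÷ 3 = 3)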




Octal:


8^0 = 1

8^1 = 8

8^2 = 64

8^3 = 512

8^4 = 4096

8^5 = 32768


Adding Octal Numbers:




Locate the 6 in the X row (the top header) of the octal addition table above. Next locate 5 in the Y column (the left-hand header). The point in area Z where the two intersect is the sum. Therefore: 6₈ + 5₈ = 13₈



http://www.tpub.com/content/neets/14185/css/14185_38.htm



Subtraction of Octal Numbers:



The subtraction of octal numbers follows the same rules as the subtraction of numbers in any other number system. The only variation is in the quantity of the borrow. In the decimal system, you had to borrow a group of 10₁₀. In the binary system, you borrowed a group of 2₁₀. In the octal system you will borrow a group of 8₁₀.


Consider the subtraction of 1 from 10 in the decimal, binary, and octal number systems:

Decimal: 10 - 1 = 9     Binary: 10 - 1 = 1     Octal: 10 - 1 = 7





In each example, you cannot subtract 1 from 0 and have a positive difference. You must use a borrow from the next column of numbers. Let's examine the above problems and show the borrow as a decimal quantity for clarity:
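Decimal: 10 - 1  ->  borrow a 10:  (0 + 10) - 1 = 9
Binary:  10 - 1  ->  borrow a 2:   (0 + 2)  - 1 = 1
Octal:   10 - 1  ->  borrow an 8:  (0 + 8)  - 1 = 7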



When you use the borrow, the column you borrow from is reduced by 1, and the amount of the borrow is added to the column of the minuend being subtracted. The following examples show this procedure:
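   46₈
 -  7₈
 -----
   37₈    (the 4 is reduced to 3, and 16₈ - 7₈ = 7₈)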



In the octal example 7₈ cannot be subtracted from 6₈, so you must borrow from the 4. Reduce the 4 by 1 and add 10₈ (the borrow) to the 6₈ in the minuend. By subtracting 7₈ from 16₈, you get a difference of 7₈. Write this number in the difference line and bring down the 3. You may need to refer to the octal addition table above until you are familiar with octal numbers. To use the table for subtraction, follow these directions: locate the subtrahend in column Y, then find where this line intersects with the minuend in area Z. The remainder, or difference, will be in row X directly above this point.
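Python can confirm this result (a sketch of mine; int(x, 8) parses an octal string and oct() formats one):

print(oct(int("46", 8) - int("7", 8)))   # 0o37, i.e. 37 in octal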



Octal Multiplication Table:

| 0 1 2 3 4 5 6 7
---+-----------------------
0 | 0 0 0 0 0 0 0 0
1 | 0 1 2 3 4 5 6 7
2 | 0 2 4 6 10 12 14 16
3 | 0 3 6 11 14 17 22 25
4 | 0 4 10 14 20 24 30 34
5 | 0 5 12 17 24 31 36 43
6 | 0 6 14 22 30 36 44 52
7 | 0 7 16 25 34 43 52 61





Hexadecimal:



16^0 = 1

16^1 = 16

16^2 = 256

16^3 = 4096

16^4 = 65536

16^5 = 1048576



Hexadecimal Addition Table:




| 0 1 2 3 4 5 6 7 8 9 A B C D E F
---+-----------------------------------------------
0 | 0 1 2 3 4 5 6 7 8 9 A B C D E F
1 | 1 2 3 4 5 6 7 8 9 A B C D E F 10
2 | 2 3 4 5 6 7 8 9 A B C D E F 10 11
3 | 3 4 5 6 7 8 9 A B C D E F 10 11 12
4 | 4 5 6 7 8 9 A B C D E F 10 11 12 13
5 | 5 6 7 8 9 A B C D E F 10 11 12 13 14
6 | 6 7 8 9 A B C D E F 10 11 12 13 14 15
7 | 7 8 9 A B C D E F 10 11 12 13 14 15 16
8 | 8 9 A B C D E F 10 11 12 13 14 15 16 17
9 | 9 A B C D E F 10 11 12 13 14 15 16 17 18
A | A B C D E F 10 11 12 13 14 15 16 17 18 19
B | B C D E F 10 11 12 13 14 15 16 17 18 19 1A
C | C D E F 10 11 12 13 14 15 16 17 18 19 1A 1B
D | D E F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C
E | E F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D
F | F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E



Hexadecimal Multiplication Table:

| 0 1 2 3 4 5 6 7 8 9 A B C D E F
---+-----------------------------------------------
0 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 | 0 1 2 3 4 5 6 7 8 9 A B C D E F
2 | 0 2 4 6 8 A C E 10 12 14 16 18 1A 1C 1E
3 | 0 3 6 9 C F 12 15 18 1B 1E 21 24 27 2A 2D
4 | 0 4 8 C 10 14 18 1C 20 24 28 2C 30 34 38 3C
5 | 0 5 A F 14 19 1E 23 28 2D 32 37 3C 41 46 4B
6 | 0 6 C 12 18 1E 24 2A 30 36 3C 42 48 4E 54 5A
7 | 0 7 E 15 1C 23 2A 31 38 3F 46 4D 54 5B 62 69
8 | 0 8 10 18 20 28 30 38 40 48 50 58 60 68 70 78
9 | 0 9 12 1B 24 2D 36 3F 48 51 5A 63 6C 75 7E 87
A | 0 A 14 1E 28 32 3C 46 50 5A 64 6E 78 82 8C 96
B | 0 B 16 21 2C 37 42 4D 58 63 6E 79 84 8F 9A A5
C | 0 C 18 24 30 3C 48 54 60 6C 78 84 90 9C A8 B4
D | 0 D 1A 27 34 41 4E 5B 68 75 82 8F 9C A9 B6 C3
E | 0 E 1C 2A 38 46 54 62 70 7E 8C 9A A8 B6 C4 D2
F | 0 F 1E 2D 3C 4B 5A 69 78 87 96 A5 B4 C3 D2 E1
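Tables like the two above are easy to regenerate rather than memorize. This short Python sketch (mine, not from the source) prints the hexadecimal multiplication table; changing x * y to x + y yields the addition table instead:

digits = "0123456789ABCDEF"
header = " ".join(format(d, ">2") for d in digits)
print("   | " + header)
print("---+" + "-" * (len(header) + 1))
for x in range(16):
    # format(n, ">2X") renders each product as a width-2 hex value.
    row = " ".join(format(x * y, ">2X") for y in range(16))
    print(" " + digits[x] + " | " + row)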






Conversion: Octal to Decimal


Original Number:      2    3    4
                      |    |    |
How Many Tokens:      2    3    4
Digit/Token Value:   64    8    1
Value:              128 + 24  + 4 = 156

 

Hexadecimal to Decimal 

 

Original Number:      4    B    3
                      |    |    |
How Many Tokens:      4   11    3
Digit/Token Value:  256   16    1
Value:             1024 +176  + 3 = 1203

 

Decimal to Binary

Note that the desired base is 2, so we repeatedly divide the given decimal number by 2.

Quotient Remainder
-----------------------------
1341/2 = 670 1 ----------------------+
670/2 = 335 0 --------------------+ |
335/2 = 167 1 ------------------+ | |
167/2 = 83 1 ----------------+ | | |
83/2 = 41 1 --------------+ | | | |
41/2 = 20 1 ------------+ | | | | |
20/2 = 10 0 ----------+ | | | | | |
10/2 = 5 0 --------+ | | | | | | |
5/2 = 2 1 ------+ | | | | | | | |
2/2 = 1 0 ----+ | | | | | | | | |
1/2 = 0 1 --+ | | | | | | | | | | (Stop when the quotient is 0)
| | | | | | | | | | |
1 0 1 0 0 1 1 1 1 0 1 (BIN; Base 2)

Decimal to Octal

divide by 8

Quotient Remainder
-----------------------------
1341/8 = 167 5 --------+
167/8 = 20 7 ------+ |
20/8 = 2 4 ----+ | |
2/8 = 0 2 --+ | | | (Stop when the quotient is 0)
| | | |
2 4 7 5 (OCT; Base 8)

Decimal to Hexadecimal

divide by 16

Quotient Remainder
-----------------------------
1341/16 = 83 13 ------+
83/16 = 5 3 ----+ |
5/16 = 0 5 --+ | | (Stop when the quotient is 0)
| | |
5 3 D (HEX; Base 16)
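The repeated-division recipe generalizes to any base. Here is a minimal Python sketch (mine; the name convert is not from the source) that reproduces all three examples:

def convert(n, base):
    # Divide repeatedly by the base; the remainders, read in
    # reverse order, are the digits of the converted number.
    digits = "0123456789ABCDEF"
    out = ""
    while n > 0:
        n, r = divmod(n, base)
        out = digits[r] + out
    return out or "0"

print(convert(1341, 2))   # 10100111101
print(convert(1341, 8))   # 2475
print(convert(1341, 16))  # 53D

Python's built-in int("2475", 8) and the bin/oct/hex functions perform the reverse checks.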

Subtraction is
equivalent to adding a negative number, and division is
equivalent to multiplying by the inverse.


Central Processing Unit (CPU):

is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions. The central processing unit carries out each instruction of the program in sequence, to perform the basic arithmetical, logical, and input/output operations of the system. This term has been in use in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.

System Unit:
computer case (also known as a computer chassis, cabinet, box, tower, enclosure, housing, system unit or simply case) is the enclosure that contains most of the components of a computer (usually excluding the display, keyboard and mouse). A computer case is sometimes incorrectly referred to metonymously as a CPU or hard drive referring to components housed within the case. CPU was a more common term in the earlier days of home computers, when peripherals other than the motherboard were usually housed in their own separate cases.

Parts of the System Unit :























Motherboard:

is the central printed circuit board (PCB) in many modern computers and holds many of the crucial components of the system, while providing connectors for other peripherals. The motherboard is sometimes alternatively known as the main board, system board, or, on Apple computers, the logic board. It is also sometimes casually shortened to mobo.























How the CPU Works:













Control Unit:
in general is a central (or sometimes distributed but clearly distinguishable) part of the machinery that controls its operation, provided that a piece of machinery is complex and organized enough to contain any such unit.






















Arithmetic – Logic Unit:
In computing, an arithmetic logic unit (ALU) is a digital circuit that performs arithmetic and logical operations. The ALU is a fundamental building block of the central processing unit (CPU) of a computer, and even the simplest microprocessors contain one for purposes such as maintaining timers. The processors found inside modern CPUs and graphics processing units (GPUs) accommodate very powerful and very complex ALUs; a single component may contain a number of ALUs.














MACHINE INSTRUCTION CYCLE:

The cycle of computer processing, whose speed is measured in terms of the number of instructions a chip processes per second.

>Clock Speed:
The clock speed of a CPU is defined as the frequency at which the processor executes instructions or at which data is processed. Clock speed is measured in millions of cycles per second, or megahertz (MHz). The clock itself is actually a quartz crystal that vibrates at a certain frequency when electricity is passed through it. Each vibration sends out a pulse, or beat, like a metronome, to each component that is synchronized with it.

>Word Length:

The number of bits, digits, characters, or bytes in one word.

>Bus Width:
The size of the physical paths down which the data and instructions travel as electrical impulses on a computer chip.

>Line width
The distance between transistors; the smaller the line width, the faster the chip.
Current Microprocessors:


















Basic CPU Structure:















Inside the CPU:










Primary Storage:
Also known as internal memory and main memory, primary storage is a storage location that holds data for short periods of time while the computer is on. For example, computer RAM and cache are both examples of primary storage devices. This type of storage is the fastest memory in your computer and is used to store data while it is being used; for example, when you open a program, data is moved from secondary storage into primary storage.














Register:
When referring to a computer processor, register refers to the location within your processor used to store and process information.

RAM:
Short for Random Access Memory, RAM, also known as main memory or system memory, is a term commonly used to describe the memory within a computer. Unlike ROM, RAM is a volatile memory and requires power; if power is lost, all data is also lost.

Cache:
Cache is a high-speed access area that can be either a reserved section of main memory or a storage device. The two main types of cache are memory cache and disk cache. Memory cache is a portion of memory made of high-speed static RAM (SRAM) and is effective because most programs access the same data or instructions over and over; by keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.

ROM:
Short for Read-Only Memory, ROM is memory that is capable of holding data and being read from; however, it is not capable of being written to or having its data modified. Unlike RAM, ROM is non-volatile and keeps its contents whether or not it has power.

Flash Memory:
Computer memory developed by Intel Corp. Flash memory is non-volatile memory on an integrated circuit that does not need continuous power to retain data. It is much more expensive than magnetic storage and is therefore not practical as a replacement for current hard disks or diskettes; however, flash memory is widely used in car radios, cell phones, digital cameras, PDAs, MP3 players, and printers.



Running a Program in a Computer:
















Input / Output Devices:
A hardware device that accepts input and also has the capability of outputting that information. Good examples of input/output devices are a floppy diskette drive and a hard disk drive.

Classification of I/O Devices:

>Secondary Storage Devices:

Also known as external memory and auxiliary storage, secondary storage is a storage medium that holds information until it is deleted or overwritten, regardless of whether the computer has power. A floppy disk drive and a hard disk drive are both good examples of secondary storage devices. Although primary storage is accessed much faster than secondary storage, price and size limitations mean that today's computers use secondary storage to hold all your programs and personal data.


*Magnetic Tape-
•kept on a large open reel or in a small cartridge or cassette
•inexpensive, relatively stable, and long lasting, and can store very large volumes of data
•uses sequential access

*Magnetic Disk-
•also called hard disks
•uses direct access where users can go directly to the address without having to go through intervening locations looking for the right data to retrieve.
•is like a phonograph containing a stack of metal-coated platters (usually permanently mounted) that rotate rapidly.

*Hard Disks and Disk Interfaces-
•RAID (Redundant Arrays of Inexpensive Disks) – combines large number of small disk drives; lower cost
•EIDE (Enhanced Integrated Drive Electronics) – supports up to four disks, tapes, or CD ROM drives; relatively inexpensive; Serial ATA (SATA) is its latest version
•SCSI (Small Computer Systems Interface) – used for graphics workstations, server-based storage, and large databases; higher cost

*Optical Storage Devices:

•have extremely high storage density
•information contained is highly condensed since a highly focused laser beam is used to read/write the encoded information.
•less susceptible to contamination or deterioration.
>Varieties of Optical Storage Devices
•Compact disk read-only memory (CD-ROM)
•Compact disk, rewritable (CD-RW)
•Digital Video Disk (DVD). - offers higher quality and denser storage capabilities.

*Memory PC Card:
•also known as memory sticks
•They have been widely used, particularly in portable devices such as PDAs and smart phones.

>Peripheral Devices:
is a device attached to a host computer, but not part of it, and is more or less dependent on the host. It expands the host's capabilities, but does not form part of the core computer architecture.

Peripheral Input Devices:

*Keyboard:
One of the main input devices used on a computer, a PC's keyboard looks very similar to the keyboards of electric typewriters, with some additional keys.

















*Mouse and Trackball:

>Mouse-A hardware input device that was invented by Douglas Engelbart in 1963, who at the time was working at the Stanford Research Institute, which was a think tank sponsored by Stanford University. The mouse allows an individual to control a pointer in a graphical user interface (GUI). Utilizing a mouse a user has the ability to perform various functions such as opening a program or file and does not require the user to memorize commands, like those used in a text-based command line environment such as MS-DOS.







>Trackball-A type of input device that looks like an upside-down mouse. The onscreen pointer is moved by rolling the trackball with a thumb or finger. A trackball requires less arm and wrist motion than a regular mouse and is therefore often less stressful to use, helping to prevent RSI.














*Touch Screen:
A monitor with a sensitive panel directly on the screen that registers the touch of a finger as input. Instead of being touch-sensitive, some types of touch screens also use beams across the screen to create a grid that is interrupted by the presence of a finger near the screen.






















*Stylus:
A pen-shaped instrument used with graphics tablets or touch-screen input devices to write or draw on the computer screen, much as on a sheet of paper. Unlike a pen, the stylus has a simple plastic tip and is often small enough to fit in a compartment of the device it is used with.

















*Joysticks:
An input device that looks similar to the control device you would find on an arcade game. A joystick allows an individual to easily move an object in a game, such as navigating a plane in a flight simulator.

















*POS Terminals:
•has a specialized keyboard.
•may include many features such as scanner, printer, voice synthesis (which pronounces the price by voice), and accounting software.
*Barcode Scanner:
A barcode reader (or barcode scanner) is an electronic device for reading printed barcodes. Like a flatbed scanner, it consists of a light source, a lens, and a light sensor translating optical impulses into electrical ones. Additionally, nearly all barcode readers contain decoder circuitry that analyzes the barcode's image data provided by the sensor and sends the barcode's content to the scanner's output port.















*RFID Tag:
Radio-frequency identification (RFID) is a technology that uses communication via radio waves to exchange data between a reader and an electronic tag attached to an object, for the purpose of identification and tracking. Some tags can be read from several meters away and beyond the line of sight of the reader. The application of bulk reading enables an almost parallel reading of tags.













*Optical Mark Reader:
Special scanner for detecting the presence of pencil marks on a predetermined grid, such as multiple-choice test answer sheets.
*Optical Character Reader:
•Optical scanner
•converts text and images on paper into digital form and stores the data on disk or other storage media.
Peripheral Output Devices:
*Printer:
a printer is a peripheral which produces a text and/or graphics of documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local peripherals, and are attached by a printer cable or, in most newer printers, a USB cable to a computer which serves as a document source.





























*Monitor:
A monitor or display (sometimes called a visual display unit) is an electronic visual display for computers. The monitor comprises the display device, circuitry, and an enclosure. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) thin panel, while older monitors use a cathode ray tube about as deep as the screen size.

























*Plotter:
A plotter is a computer printing device for printing vector graphics. In the past, plotters were widely used in applications such as computer-aided design, though they have generally been replaced with wide-format conventional printers, and it is now commonplace to refer to such wide-format printers as "plotters," even though they technically aren't.
















>SOFTWARE:
is a collection of instructions that enables a user to interact with the computer or have the computer perform specific tasks. Without any type of software the computer would be useless; for example, you wouldn't be able to interact with the computer without a software operating system. Almost all software purchased at a retail store or online comes in a box that usually contains all the disks (floppy diskette, CD, DVD, or Blu-ray) required to install the program onto the computer, along with manuals, the warranty, and other important documentation.

-PROGRAMMING / CODING - the process of writing programs
-PROGRAMMERS - individuals who perform programming
-Computer programs include DOCUMENTATION, which is a written description of the functions of the program.

>TYPES of SOFTWARE:
*Application Software- also known as an application or an "app", is computer software designed to help the user perform singular or multiple related specific tasks. It helps to solve problems in the real world. Examples include enterprise software, accounting software, office suites, graphics software, and media players.
*System Software- is computer software designed to operate the computer hardware and to provide and maintain a platform for running application software.
Types of Software vis-à-vis Hardware:















APPLICATION SOFTWARE

*Spreadsheet applications
-Used for creating documents to manage and organize numerical data











*Word processing applications
-Used for creating documents that are formatted and organized for readability











*Database applications
-Used for developing databases that can organize and retrieve large amounts of information














*Accounting applications
-Used for managing personal checkbooks, or the accounting functions of businesses.















*Activity management applications
-Such as calendars and address books












*Presentation applications
-Used for making slide shows













*Graphics applications
-Used for creating pictures










*Communications programs
-Such as e-mail, text messaging, and fax software for sending and receiving messages














*Multimedia applications
-Used for creating video and music














*Utilities or utility programs




>Categories of Application Software
*Off-the-shelf - can be purchased, leased, or rented from a vendor that develops programs and sells them to many organizations; may be a standard package or it may be customizable.
*Special purpose programs or “packages” - can be tailored for a specific purpose, such as inventory control or payroll.

*Other Application Software

Middleware:
Sometimes referred to as glue, middleware is a term used to describe a software program, service, or a portion of programming code that allows two or more software programs to communicate and work with each other.
Enterprise Applications :

Also known as enterprise application software (EAS), this is software used in organizations, such as a business or government, as opposed to software chosen by individuals (for example, retail software).

Services provided by enterprise software are typically business-oriented tools such as online shopping and online payment processing, interactive product catalogues, automated billing systems, security, content management, IT service management, customer relationship management, resource planning, business intelligence, HR management, manufacturing, application integration, and forms automation.

Presence Software :

Can detect when you’re online and what kind of device you’re using.


System Software


Categories of System Software:

>System control programs


*These are programs that control the use of hardware, software, and data resources of a computer system during its execution of a user’s information processing job.
*An operating system is the prime example of a system control program


>System support programs

*It supports the operations, management, and users of a computer system by providing a variety of services.
*System utility programs, performance monitors, and security monitors are examples of system support programs.


>System development programs


*It helps users develop information processing programs and procedures and prepare user applications.
*Major development programs are language compilers, interpreters, and translators.


PROGRAMMING LANGUAGES
>These are a set of symbols and rules used to write program code.
>The characteristics of the languages depend on their purpose.

Evolution of Programming Language:












*Machine Language (First Generation):
The lowest-level computer language, consisting of the internal representation of instructions and data.


*Assembly Language (Second Generation):

It is a more user-oriented language that represents instructions and data locations by using mnemonics, or memory aids.


*Procedural Language (Third Generation):

Procedural languages are much closer to so-called natural language (the way we talk) and are therefore easier to write, read, and alter.

*Nonprocedural Language (Fourth Generation):
Allows the user to specify the desired results without having to specify the detailed procedures needed to achieve the results.


*Natural Programming Language (Fifth Generation):

They are also referred to as intelligent languages. These languages are commonly used in the development of artificial intelligence applications.



WEB PROGRAM LANGUAGE AND SOFTWARE:
*Hypertext Markup Language (HTML) – the standard language the Web uses for creating and recognizing hypermedia documents

*JavaScript - an object-oriented scripting language that allows users to add some interactivity to their Web pages.

OPEN SOURCE SOFTWARE:

*Software made available in source code form at no cost to developers.

*Open source software is, in many cases, more reliable than proprietary software. Because the code is available to many developers, more bugs are discovered, they are discovered early and quickly, and they are fixed immediately.

Shareware and Freeware:

*Shareware - software where the user is expected to pay the author a modest amount for the privilege of using it.

*Freeware - software that is free.

*Shareware and freeware are often not as powerful (do not have the full complement of features) as the professional versions, but some users get what they need at a good price.