History of computing
Main article: History of computer hardware
The Jacquard loom was one of the first programmable devices.
It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.
The history of the modern computer begins with two separate technologies: that of automated calculation and that of programmability.
Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.
Hero of Alexandria (c. 10-70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions, and when.[3] This is the essence of programmability. In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine".[4] Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.
Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
Defining characteristics of some early digital computers of the 1940s (in the history of computing hardware):

Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete
Zuse Z3 (Germany) | May 1941 | Binary | Electro-mechanical | Program-controlled by punched film stock | Yes (1998)
Atanasoff–Berry Computer (USA) | Summer 1941 | Binary | Electronic | Not programmable (single purpose) | No
Colossus (UK) | January 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No
Harvard Mark I – IBM ASCC (USA) | 1944 | Decimal | Electro-mechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | Yes (1998)
ENIAC (USA) | November 1945 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes
Manchester Small-Scale Experimental Machine (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes
Modified ENIAC (USA) | September 1948 | Decimal | Electronic | Program-controlled by patch cables and switches plus a primitive read-only stored programming mechanism using the Function Tables as program ROM | Yes
EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
Manchester Mark I (UK) | October 1949 | Binary | Electronic | Stored-program in Williams cathode ray tube memory and magnetic drum memory | Yes
CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:
EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.
Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, making it, in retrospect, the world's first operational computer.
The non-programmable Atanasoff–Berry Computer (1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.
The secret British Colossus computers (1943),[5] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. They were used for breaking German wartime codes.
The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the stored program architecture or von Neumann architecture. This design was first formally described by John von Neumann in the paper "First Draft of a Report on the EDVAC", published in 1945. A number of projects to develop computers based on the stored program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM) or "Baby". However, the EDSAC, completed a year after SSEM, was perhaps the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper, the EDVAC, was completed but did not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored program architecture, making it the single trait by which the word "computer" is now defined. By this standard, many earlier devices would no longer be called computers by today's definition, but are usually referred to as such in their historical context. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture. The design made the universal computer a practical reality.
Microprocessors are miniaturized devices that often implement stored program CPUs.
Vacuum tube-based computers were in use throughout the 1950s. Vacuum tubes were largely replaced in the 1960s by transistor-based computers. When compared with tubes, transistors are smaller, faster, cheaper, use less power, and are more reliable. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, caused another generation of decreased size and cost, and another generation of increased speed and reliability. By the 1980s, computers became sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.
Stored program architecture
Main articles: Computer program and Computer programming
The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

        mov #0,sum      ; set sum to 0
        mov #1,num      ; set num to 1
loop:   add num,sum     ; add num to sum
        add #1,num      ; add 1 to num
        cmp num,#1000   ; compare num to 1000
        ble loop        ; if num <= 1000, go back to 'loop'
        halt            ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[6]
However, computers cannot "think" for themselves, in the sense that they solve problems only in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers one can simply use the equation 1 + 2 + ... + n = n(n+1)/2 and, with n = 1000, arrive at the correct answer (500,500) with little work.[7] In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.
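For comparison, the same task can be sketched in a high-level language. The following short Python program (an illustration added here, not part of the PDP-11 example) mirrors the loop above and then checks the result against the closed-form formula:

# Loop version, mirroring the assembly program above
total = 0                  # set sum to 0
num = 1                    # set num to 1
while num <= 1000:         # equivalent of the cmp/ble pair
    total = total + num    # add num to sum
    num = num + 1          # add 1 to num

# Shortcut: 1 + 2 + ... + n equals n*(n+1)/2
n = 1000
assert total == n * (n + 1) // 2
print(total)               # prints 500500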
Programs
A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) = Y + W(1)" and is labelled "PROJ039" for identification purposes.
In practical terms, a computer program might include anywhere from a dozen instructions to many millions of instructions for something like a word processor or a web browser. A typical modern computer can execute billions of instructions every second and nearly never make a mistake over years of operation.
Large computer programs may take teams of computer programmers years to write, and it is unlikely that the entire program has been written completely in the manner intended. Errors in computer programs are called bugs. Sometimes bugs are benign and do not affect the usefulness of the program; in other cases they might cause the program to completely fail (crash); in yet other cases there may be subtle problems. Sometimes otherwise benign bugs may be used for malicious intent, creating a security exploit. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[8]
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
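To make this concrete, the following minimal Python sketch shows a toy machine whose program and data sit side by side in a single memory of numbers. The three-instruction set and its opcode numbers are invented purely for illustration; real instruction sets are far larger and encoded differently:

# A toy memory holding both a program and its data, all as numbers.
# Invented opcodes: 1 = add, 2 = store, 0 = halt.
memory = [
    1, 13, 14,            # ADD:   memory[13] + memory[14] -> accumulator
    2, 15,                # STORE: accumulator -> memory[15]
    0,                    # HALT
    0, 0, 0, 0, 0, 0, 0,  # unused cells (padding)
    2, 3,                 # data: the two numbers to add
    0,                    # data: cell 15 receives the result
]

pc = 0    # program counter: address of the next instruction
acc = 0   # accumulator register
while True:
    opcode = memory[pc]
    if opcode == 1:       # add the contents of two memory cells
        acc = memory[memory[pc + 1]] + memory[memory[pc + 2]]
        pc += 3
    elif opcode == 2:     # store the accumulator into a memory cell
        memory[memory[pc + 1]] = acc
        pc += 2
    else:                 # halt
        break

print(memory[15])  # prints 5: program and data share one memory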
While it is possible to write computer programs as long lists of numbers (machine language) and this technique was used with many early computers,[9] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember, a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or an AMD Athlon 64 computer that might be in a PC.[10]
Though considerably easier than writing in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[11] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.
Example
A traffic light showing red.
Suppose a computer is being employed to drive a traffic light. A simple stored program might say:
1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. Jump to instruction number (2)
With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again until told to stop running the program.
However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:
1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11
In this manner, the computer is either running the instructions from number (2) to (11) over and over, or it is running the instructions from (11) down to (16) over and over, depending on the position of the switch.[12]
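Rendered in a high-level language, the same logic might look like the following Python sketch. The light-control and switch-reading functions are hypothetical stand-ins for real hardware I/O, and, like the instruction list, the program tests the switch only once per normal cycle, which preserves the bug discussed in note [12]:

import time

def set_light(colour, on):
    # Hypothetical stand-in for a hardware output line.
    print(colour, "on" if on else "off")

def maintenance_switch_on():
    # Hypothetical stand-in for reading a hardware switch; always off here.
    return False

# Instruction 1: turn off all of the lights.
for colour in ("red", "green", "yellow"):
    set_light(colour, False)

while True:
    # Instructions 2-10: the normal red/green/yellow cycle.
    set_light("red", True)
    time.sleep(60)
    set_light("red", False)
    set_light("green", True)
    time.sleep(60)
    set_light("green", False)
    set_light("yellow", True)
    time.sleep(2)
    set_light("yellow", False)
    # Instructions 11-16: while the switch is on, flash red;
    # when it is turned off, fall back to the normal cycle.
    while maintenance_switch_on():
        set_light("red", True)
        time.sleep(1)
        set_light("red", False)
        time.sleep(1)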
How computers work
Main articles: Central processing unit and Microprocessor
A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by busses, often made of groups of wires.
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.
Control unit
Main articles: CPU design and Control unit
The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer.[13] Control systems in advanced computers may change the order of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[14]
Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.
The control system's function is as follows (note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU):
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
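This fetch-decode-execute cycle can be sketched in a few lines of Python. The four-instruction set here is invented purely for illustration; note how a jump is nothing more than a write to the program counter:

# Program: count down from 3 to 0 using a conditional jump.
# Invented instruction format: (mnemonic, operand).
program = [
    ("LOAD", 3),      # 0: put the number 3 in the accumulator
    ("SUB", 1),       # 1: subtract 1 from the accumulator
    ("JNZ", 1),       # 2: if the accumulator is not zero, jump to address 1
    ("HALT", None),   # 3: stop
]

pc = 0    # the program counter
acc = 0   # the accumulator
while True:
    op, arg = program[pc]              # step 1: fetch the instruction
    pc += 1                            # step 3: increment the program counter
    if op == "LOAD":                   # steps 2 and 6: decode and execute
        acc = arg
    elif op == "SUB":
        acc -= arg
    elif op == "JNZ" and acc != 0:
        pc = arg                       # a jump just overwrites the counter
    elif op == "HALT":
        break

print(acc)                             # prints 0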
It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.
Arithmetic/logic unit (ALU)
Main article: Arithmetic logic unit
The ALU is capable of performing two classes of operations: arithmetic and logic.
The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
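A brief Python illustration of these operations applied to single bits, together with an ALU-style comparison (added here for illustration only):

a, b = 1, 0     # two single-bit values
print(a & b)    # AND: 0 (true only if both bits are 1)
print(a | b)    # OR:  1 (true if either bit is 1)
print(a ^ b)    # XOR: 1 (true if exactly one bit is 1)
print(a ^ 1)    # NOT of a single bit: 0
print(64 > 65)  # a comparison yields a Boolean truth value: False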
Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.
Memory
Main article: Computer storage
Magnetic core memory was popular main memory for computers through the 1960s until it was completely replaced by semiconductor memory.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
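These conventions can be illustrated in Python (a sketch added for this article): the same eight-bit pattern can be read as an unsigned value from 0 to 255 or as a signed two's-complement value, and larger numbers simply span several bytes:

x = 200
raw = x.to_bytes(1, "big")                      # one byte holds 0 to 255
print(raw)                                      # b'\xc8'
print(int.from_bytes(raw, "big", signed=True))  # same bits read as two's complement: -56

big = (500_500).to_bytes(4, "big")              # larger numbers use several consecutive bytes
print(int.from_bytes(big, "big"))               # 500500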
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM so its use is restricted to applications where high speeds are not required.[15]
In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
Main article: Input/output
Hard disks are common I/O devices used with computers.
I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics[citation needed]. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run at the same time without unacceptable speed loss.
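The following toy Python simulation, invented here for illustration, shows round-robin time-slicing between two "programs". Real operating systems rely on hardware interrupts to force a switch; this sketch instead has each program voluntarily yield at the end of its slice:

def count(name, n):
    # A "program" that gives control back to the scheduler after each step.
    for i in range(1, n + 1):
        print(name, "step", i)
        yield              # end of this program's time slice

# Two programs apparently running "at the same time".
programs = [count("A", 3), count("B", 3)]
while programs:
    prog = programs.pop(0)     # take the next program in the queue
    try:
        next(prog)             # run it for one time slice
        programs.append(prog)  # not finished: back of the queue
    except StopIteration:
        pass                   # finished: drop it from the queue
# Output interleaves the two programs: A 1, B 1, A 2, B 2, A 3, B 3.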
Multiprocessing
Main article: Multiprocessing
Cray designed many supercomputers that used multiprocessing heavily.
Some computers may divide their work between two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.
Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[16] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
Networking and the Internet
Main articles: Computer networking and Internet
Visualization of a portion of the routes on the Internet.
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly common even in mobile computing environments.
Further topics
Hardware
Main article: Computer hardware
The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.
History of computing hardware

First Generation (Mechanical/Electromechanical)
Calculators: Antikythera mechanism, Difference Engine, Norden bombsight
Programmable Devices: Jacquard loom, Analytical Engine, Harvard Mark I, Z3

Second Generation (Vacuum Tubes)
Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
Programmable Devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark I, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22

Third Generation (Discrete transistors and SSI, MSI, LSI Integrated circuits)
Mainframes: IBM 7090, IBM 7080, System/360, BUNCH
Minicomputers: PDP-8, PDP-11, System/32, System/36

Fourth Generation (VLSI integrated circuits)
Minicomputers: VAX, IBM System i
4-bit microcomputers: Intel 4004, Intel 4040
8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
16-bit microcomputers: Intel 8088, Zilog Z8000, WDC 65816/65802
32-bit microcomputers: Intel 80386, Pentium, Motorola 68000, ARM architecture
64-bit microcomputers:[17] x86-64, PowerPC, MIPS, SPARC
Embedded computers: Intel 8048, Intel 8051
Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet computer, Wearable computer

Theoretical/experimental: Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics based computer

Other Hardware Topics
Peripheral device (Input/output)
Input: Mouse, Keyboard, Joystick, Image scanner
Output: Monitor, Printer
Both: Floppy disk drive, Hard disk, Optical disc drive, Teleprinter
Computer busses
Short range: RS-232, SCSI, PCI, USB
Long range (Computer networking): Ethernet, ATM, FDDI

Software
Main article: Computer software
Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called "firmware" to indicate that it falls into an uncertain area somewhere between hardware and software.
Computer software

Operating system
Unix/BSD: UNIX System V, AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
GNU/Linux: List of Linux distributions, Comparison of Linux distributions
Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP, Windows Vista, Windows CE
DOS: 86-DOS (QDOS), PC-DOS, MS-DOS, FreeDOS
Mac OS: Mac OS classic, Mac OS X
Embedded and real-time: List of embedded operating systems
Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs

Library
Multimedia: DirectX, OpenGL, OpenAL
Programming library: C standard library, Standard template library

Data
Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
File format: HTML, XML, JPEG, MPEG, PNG

User interface
Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM
Text user interface: Command line interface, shells

Application
Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
Internet Access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
Software Engineering: Compiler, Assembler, Interpreter, Debugger, Text Editor, Integrated development environment, Performance analysis, Revision control, Software configuration management
Educational: Edutainment, Educational game, Serious game, Flight simulator
Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Programming languages
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine language by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of different programming languages; some intended to be general purpose, others useful only for highly specialized applications.
Lists of programming languages: Timeline of programming languages, Categorical list of programming languages, Generational list of programming languages, Alphabetical list of programming languages, Non-English-based programming languages
Commonly used assembly languages: ARM, MIPS, x86
Commonly used high level languages: BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal
Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

Professions and organizations
As the use of computers has spread throughout society, there are an increasing number of careers involving computers. Following the theme of hardware, software and firmware, the brains of people who work in the industry are sometimes known irreverently as wetware or "meatware".
Computer-related professions
Hardware-related: Electrical engineering, Electronics engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoscale engineering
Software-related: Computer science, Human-computer interaction, Information technology, Software engineering, Scientific computing, Web design, Desktop publishing

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
Organizations
Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional Societies: ACM, ACM Special Interest Groups, IET, IFIP
Free/Open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

See also
Computability theory
Computer science
Computing
Computers in fiction
Computer security and Computer insecurity
Electronic waste
List of computer term etymologies
Virtualization

Notes
^ In 1946, ENIAC consumed an estimated 174 kW. By comparison, a typical personal computer may use around 400 W; over four hundred times less. (Kempf 1961)
^ Early computers such as Colossus and ENIAC were able to process between 5 and 100 operations per second. A modern "commodity" microprocessor (as of 2007) can process billions of operations per second, and many of these operations are more complicated and useful than early computer operations.
^ Heron of Alexandria. Retrieved on 2008-01-15.
^ The Analytical Engine should not be confused with Babbage's difference engine, which was a non-programmable mechanical calculator.
^ B. Jack Copeland, ed., Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford University Press, 2006
^ This program was written similarly to those for the PDP-11 minicomputer and shows some typical things a computer can do. All of the text after the semicolons consists of comments for the benefit of human readers. These have no significance to the computer and are ignored. (Digital Equipment Corporation 1972)
^ Attempts are often made to create programs that can overcome this fundamental limitation of computers. Software that mimics learning and adaptation is part of artificial intelligence.
^ It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.
^ Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.
^ However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentium and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.
^ High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly by another program called an interpreter.
^ Although this is a simple program, it contains a software bug. If the traffic signal is showing red when someone switches the "flash red" switch, it will cycle through green once more before starting to flash red as instructed. This bug is quite easy to fix by changing the program to repeatedly test the switch throughout each "wait" period—but writing large programs that have no bugs is exceedingly difficult.
^ The control unit's role in interpreting instructions has varied somewhat in the past. While the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Many computers include some instructions that may only be partially interpreted by the control system and partially interpreted by another device. This is especially the case with specialized computing hardware that may be partially self-contained. For example, EDVAC, the first modern stored program computer to be designed, used a central control unit that only interpreted four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
^ Instructions often occupy more than one memory address, so the program counter usually increases by the number of memory locations required to store one instruction.
^ Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access usage. (Verma 1988)
^ However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware; usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years.
^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table existed in 32-bit forms before their 64-bit incarnations were introduced.
References
Kempf, Karl (1961). "Historical Monograph: Electronic Computers Within the Ordnance Corps". Aberdeen Proving Ground (United States Army).
Phillips, Tony (2000). The Antikythera Mechanism I. American Mathematical Society. Retrieved on 2006-04-05.
Shannon, Claude Elwood (1940). "A symbolic analysis of relay and switching circuits". Massachusetts Institute of Technology.
Digital Equipment Corporation (1972). PDP-11/40 Processor Handbook (PDF). Maynard, MA: Digital Equipment Corporation.
Verma, G.; Mielke, N. (1988). "Reliability performance of ETOX based flash memories". IEEE International Reliability Physics Symposium.
Stokes, Jon (2007). Inside the Machine: An Illustrated Introduction to Microprocessors and Computer Architecture. San Francisco: No Starch Press. ISBN 978-1-59327-104-6.