Saturday, August 23, 2008

supercomputer


A supercomputer is a computer that is at the frontline of processing capacity, particularly speed of calculation (at the time of its introduction). The term "Super Computing" was first used by the New York World newspaper in 1929[1] to refer to large custom-built tabulators that IBM had made for Columbia University.
Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he only ever recognized the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and HP, which had purchased many of the 1980s companies to gain their experience.
===========================================================================================================

Common uses:
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users.
A particular class of problems, known as Grand Challenge problems, consists of problems whose full solution requires practically unlimited computing resources.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
=========================================================================================================

Hardware and software design:
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
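One rough way to see the memory hierarchy at work, even on an ordinary PC, is to compare sequential and random access over a large array. The Python sketch below is purely illustrative (the array size is an arbitrary assumption, and interpreter overhead masks much of the effect), but the random pass generally comes out slower because it defeats the caches and prefetchers:

import array
import random
import time

N = 5_000_000
data = array.array("d", (float(i) for i in range(N)))

indices = list(range(N))
random.shuffle(indices)              # same accesses, cache-hostile order

t0 = time.perf_counter()
total = 0.0
for i in range(N):                   # sequential pass: cache-friendly
    total += data[i]
t1 = time.perf_counter()
for i in indices:                    # random pass: frequent cache misses
    total += data[i]
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.2f}s   random: {t2 - t1:.2f}s")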
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
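Amdahl's law itself is easy to state: if a fraction p of a program's work can be parallelized across n processors, the overall speedup is bounded by 1 / ((1 - p) + p/n). A minimal sketch in Python (the 5% serial fraction is just an assumed example figure):

def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the work runs on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1024 processors, a 5% serial fraction caps the speedup near 20x:
print(amdahl_speedup(0.95, 1024))    # ~19.6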

=====================================================================================================================

Supercomputer challenges and technologies:
A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1-5 microseconds to send a message between CPUs are typical.
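The arithmetic behind this constraint is simple: light covers roughly 0.3 metres per nanosecond, and signals in real cables are slower still. A quick sketch (the 10-metre machine size is an assumed example):

SPEED_OF_LIGHT = 3.0e8               # metres per second, in vacuum; cables are slower

def min_one_way_latency_ns(distance_m):
    """Hard lower bound on one-way signal latency over a given distance."""
    return distance_m / SPEED_OF_LIGHT * 1e9

print(min_one_way_latency_ns(10.0))  # a 10 m machine: at least ~33 ns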
Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
===========================================================================================================

Modern supercomputer architecture:
As of November 2006, the top ten supercomputers on the Top500 list (and indeed the bulk of the remainder of the list) have the same top-level architecture. Each of them is a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:
A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, where the application-level software is indifferent to the number of processors. The processors share tasks using Symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
A SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general-purpose commodity processor or a special-purpose vector processor. It could also be a high-performance processor or a low-power processor. As of 2007, the processor executes several SIMD instructions per nanosecond.
As of July 2008 the fastest machine is IBM Roadrunner. This machine is a cluster of 3240 computers, each with 40 processing cores. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.
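Multiplying out the figures quoted above gives a feel for the scale of the parallelism (this is just a sanity check of the numbers in the text, not new data):

roadrunner_cores = 3240 * 40         # 129,600 processing cores in total
columbia_streams = 20 * 512 * 2      # 20,480 concurrent data streams
print(roadrunner_cores, columbia_streams)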
Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer, and the design concepts that allowed past supercomputers to out-perform contemporaneous desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars.
Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer.
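As a toy illustration of coarse-grained parallelization on a single machine, the Python sketch below splits a sum into independent chunks and farms them out to worker processes; on a real cluster the same pattern would use message passing between nodes (e.g. MPI). The chunk sizes and worker count are arbitrary assumptions:

import multiprocessing

def partial_sum(bounds):
    """Each worker handles one independent chunk; nothing is shared."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(k * 250_000, (k + 1) * 250_000) for k in range(4)]
    with multiprocessing.Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)                     # equals sum(i*i for i in range(1_000_000))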

===========================================================================================================

Current fastest supercomputer system:
On June 8, 2008, the Cell/AMD Opteron-based IBM Roadrunner at the Los Alamos National Laboratory (LANL) was announced as the fastest operational supercomputer, with a sustained processing rate of 1.026 PFLOPS.[2][3] However, Roadrunner was then taken out of service to be shipped to its new home.

DBMS

A DBMS is a complex set of software programs that controls the organization, storage, management, and retrieval of data in a database. DBMSs are categorized according to their data structures or types; a DBMS is also sometimes known as a database manager. It is a set of prewritten programs that are used to store, update and retrieve a database. A DBMS includes:

A modeling language to define the schema of each database hosted in the DBMS, according to the DBMS data model.
The four most common types of organizations are the hierarchical, network, relational and object models. Inverted lists and other methods are also used. A given database management system may provide one or more of the four models. The optimal structure depends on the natural organization of the application's data, and on the application's requirements (which include transaction rate (speed), reliability, maintainability, scalability, and cost).
The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model, since it violates several of its fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity API that supports a standard way for programmers to access the DBMS.
Data structures (fields, records, files and objects) optimized to deal with very large amounts of data stored on a permanent data storage device (which implies relatively slow access compared to volatile main memory).
A database query language and report writer to allow users to interactively interrogate the database, analyze its data and update it according to the user's privileges on the data (a minimal sketch follows below).
It also controls the security of the database.
Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data.
If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it may not leave an audit trail of actions or provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs are customized for each data entry and updating function.
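As a minimal sketch of these pieces working together, the following uses Python's built-in sqlite3 module: the CREATE TABLE statement plays the role of the modeling language (defining a schema), and SQL serves as the query language. The table and column names are made up purely for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

# Modeling language: define the schema.
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# The DBMS handles storage and updates behind this interface.
cur.execute("INSERT INTO employee (name, salary) VALUES (?, ?)", ("Ada", 52000.0))
conn.commit()

# Query language: interrogate the data, subject to the user's privileges.
for row in cur.execute("SELECT name, salary FROM employee WHERE salary > 50000"):
    print(row)

conn.close()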
  • The DBMS accepts requests for data from the application program and instructs the operating system to transfer the appropriate data.
    When a DBMS is used, information systems can be changed much more easily as the organization's information requirements change. New categories of data can be added to the database without disruption to the existing system.
  • DBMS benefits
    Improved strategic use of corporate data
    Reduced complexity of the organization’s information systems environment
    Reduced data redundancy and inconsistency
    Enhanced data integrity
    Application-data independence
    Reduced application development and maintenance costs
    Improved flexibility of information systems
    Increased access and availability of data and information
    Logical & Physical data independence
    Control of concurrent-access anomalies
    Support for atomic transactions
    Central control of the system through the DBA (database administrator)

COMPUTER SCIENCE

Computer science (or computing science) is the study and the science of the theoretical foundations of information and computation and their implementation and application in computer systems.[1][2][3] Computer science has many sub-fields; some emphasize the computation of specific results (such as computer graphics), while others relate to properties of computational problems (such as computational complexity theory). Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems. A further subfield, human-computer interaction, focuses on the challenges in making computers and computations useful, usable and universally accessible to people.



Applications within computer science
A formal definition of computation and computability, and proof that there are computationally unsolvable and intractable problems.[10]
The concept of a programming language, a tool for the precise expression of methodological information at various levels of abstraction.[11]
Applications outside of computing
Sparked the Digital Revolution which led to the current Information Age and the Internet.[12]
In cryptography, breaking the Enigma machine was an important factor contributing to the Allied victory in World War II.[9]
Scientific computing enabled advanced study of the mind, and mapping the human genome became possible with the Human Genome Project.[12] Distributed computing projects tackle problems such as protein folding.
Algorithmic trading has increased the efficiency and liquidity of financial markets by using artificial intelligence, machine learning and other statistical and numerical techniques on a large scale.

Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.[1] It encompasses techniques and procedures, often regulated by a software development process, with the purpose of improving the reliability and maintainability of software systems.[2] The effort is necessitated by the potential complexity of those systems, which may contain millions of lines of code.[3]
The term software engineering was coined by Brian Randell and popularized by F.L. Bauer during the NATO Software Engineering Conference in 1968.[4] The discipline of software engineering includes knowledge, tools, and methods for software requirements, software design, software construction, software testing, and software maintenance tasks.[5] Software engineering is related to the disciplines of computer science, computer engineering, management, mathematics, project management, quality management, software ergonomics, and systems engineering.[6]
In 2004, the U.S. Bureau of Labor Statistics counted 760,840 software engineers holding jobs in the U.S.; in the same time period there were some 1.4 million practitioners employed in the U.S. in all other engineering disciplines combined.[7] Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and as a result most software engineers hold computer science degrees.[8] The term software engineer is used very liberally in the corporate world; very few practicing software engineers actually hold engineering degrees from accredited universities. In fact, according to the Association for Computing Machinery, "most people who now function in the U.S. as serious software engineers have degrees in computer science, not in software engineering".

history of programming


The earliest programmable machine (that is, a machine whose behavior can be controlled by changes to a "program") was Al-Jazari's programmable humanoid robot in 1206. Al-Jazari's robot was originally a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties. His mechanism had a programmable drum machine with pegs (cams) that bump into little levers that operate the percussion. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.

The Jacquard Loom, developed in 1801, is often quoted as a source of prior art. The machine used a series of pasteboard cards with holes punched in them. The hole pattern represented the pattern that the loom had to follow in weaving cloth. The loom could produce entirely different weaves using different sets of cards. The use of punched cards was also adopted by Charles Babbage around 1830, to control his Analytical Engine.

This innovation was later refined by Herman Hollerith who, in 1896, founded the Tabulating Machine Company (which became IBM). He invented the Hollerith punched card, the card reader, and the key punch machine. These inventions were the foundation of the modern information processing industry. The addition of a plug-board to his 1906 Type I Tabulator allowed it to do different jobs without having to be rebuilt (the first step toward programming). By the late 1940s there were a variety of plug-board programmable machines, called unit record equipment, to perform data processing tasks such as card reading. The early computers were also programmed using plug-boards.
The invention of the Von Neumann architecture allowed computer programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions of the particular machine, often in binary notation. Every model of computer would be likely to need different instructions to do the same task. Later assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g. ADD X, TOTAL). In 1954 Fortran, the first higher level programming language, was invented. This allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, was converted into machine instructions using a special program called a compiler. Many other languages were developed, including ones for commercial programming, such as COBOL. Programs were mostly still entered using punch cards or paper tape. (See computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough so programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punch cards.
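To make the jump concrete, here is the quoted formula as it might look in a modern high-level language, with the flavour of instruction sequence a compiler might translate it into shown as comments; the pseudo-instructions are illustrative, not a real instruction set:

def f(x):
    # A compiler might emit something like:
    #   MUL  x, 2   -> t1
    #   MUL  5, x   -> t2
    #   ADD  t1, t2 -> t3
    #   ADD  t3, 9  -> y
    return x * 2 + 5 * x + 9

print(f(3))                          # 6 + 15 + 9 = 30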

As time has progressed, computers have made giant leaps in the area of processing power. This has brought about newer programming languages that are more abstracted from the underlying hardware. Although these more abstracted languages require additional overhead, in most cases the huge increase in speed of modern computers means little noticeable performance loss compared to earlier counterparts. The benefits of these more abstracted languages are that they offer an easier learning curve for people less familiar with older lower-level programming languages, and they allow a more experienced programmer to develop simple applications quickly. Despite these benefits, large complicated programs, and programs that depend heavily on speed, still require the faster and relatively lower-level languages on today's hardware. (The same concerns were raised about the original Fortran language.)
Throughout the second half of the twentieth century, programming was an attractive career in most developed countries. Some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities in less developed areas. It is unclear how far this trend will continue and how deeply it will impact programmer wages and opportunities.

programming

Computer programming (often shortened to programming or coding), sometimes considered a branch of applied mathematics, is the process of writing, testing, debugging/troubleshooting, and maintaining the source code of computer programs. This source code is written in a programming language. The code may be a modification of an existing source or something completely new. The purpose of programming is to create a program that exhibits a certain desired behavior (customization). The process of writing source code requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.
Said another way, programming is the craft of transforming requirements into something that a computer can execute.

Within software engineering, programming (the implementation) is regarded as one phase in a software development process.
There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an engineering discipline.[1] Good programming is generally considered to be the measured application of all three, with the goal of producing an efficient and maintainable software solution (the criteria for "efficient" and "maintainable" vary considerably). The discipline differs from many other technical professions in that programmers generally do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers".

Another ongoing debate is the extent to which the programming language used in writing programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir-Whorf hypothesis[2] in linguistics, which postulates that a particular language's nature influences the habitual thought of its speakers. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.

Sunday, August 17, 2008

advantages of computers in business

  1. Computers are a multimedia tool. With integrated graphic, print, audio, and video capabilities, computers can effectively link various technologies.
  2. Computers are interactive. Interactive video and CD-ROM technologies can be incorporated into computer-based instructional units.
  3. Microcomputer systems incorporating various software packages are extremely flexible and maximize control.
  4. Computer technology is rapidly advancing.
  5. Innovations are constantly emerging, while related costs drop.
  6. Computers increase access. Local, regional, and national networks link resources and individuals, wherever they might be.


Computers may also cause technology to be used wrongly, e.g. fierce competition in discovery learning that undermines other people's work.

  • Computers can become inoperable due to a power failure, leaving large companies at a standstill, e.g. in water-pump production or vehicle manufacturing.
  • To a small extent, computers may lead to redundancy. Clerical work may become more straightforward to perform by using specially designed software.
  • Although this may happen, it has never materialized to the great degree forecast at the beginning of the 1980s, when the first technology boom was at its peak.

USES OF COMPUTERS IN BANKING:
E-Banking:

Electronic banking, also known as electronic fund transfer (EFT), uses computer and electronic technology as a substitute for checks and other paper transactions.
ATM:
Automated Teller Machines or 24-hour Tellers are electronic terminals that let you bank almost any time. To withdraw cash, make deposits, or transfer funds between accounts, you generally insert an ATM card and enter your PIN. ATM use has been increasing over the years.
DIRECT DEPOSIT:
Direct Deposit lets you authorize specific deposits, such as paychecks and Social Security checks, to your account on a regular basis.
PC BANKING:
Personal Computer Banking lets you handle many banking transactions via your personal computer. You may use your computer to view your account balance, request transfers between accounts, and pay bills electronically.
POINT-OF-SALE TRANSFERS:
Lets you pay for purchases with a debit card, which also may be your ATM card.