|| : Jan 6th, 2001
|| : Programming
|| : N/A
|| : Jin-Wei Tioh

A computer. Whether an old 386 or a l33t Duron 650 @ 1GHz, every computer can be divided into three main units :
Arithmetic Logic Unit (ALU)
This unit performs the basic functions of data transfer, arithmetic operations, and decision making. Data transfer involves moving data from one location to another within the computer. Arithmetic operations include addition, subtraction, multiplication and division; the arithmetic element in the ALU is called an adder. Decision-making is the ability to make quantitative comparisons.
Control Unit
This unit is responsible for initiating and controlling the sequence of operations within the computer. It communicates with input and output devices, using control signals to initiate data transfers. However, it does not itself input, output, process or store data.
Memory Unit
The memory unit encompasses two main areas: ROM (Read Only Memory) and RAM (Random Access Memory). ROM is non-volatile, storing permanent, non-alterable programs. Some variants, such as EPROM (Erasable Programmable Read Only Memory), do allow their contents to be modified, albeit via special means such as EPROM programmers. RAM is referred to as primary storage. It serves as a temporary storage area for data, programs and the operating system.
So what is programming all about? Simply put, it is creating a series of clear and logical instructions that direct the CPU to solve a particular problem. Consider driving a car: you can't simply will it to move. You have to perform many tasks, such as turning the ignition, shifting gears and pressing down on the accelerator pedal. Similarly, you can't just ask a computer to retrieve your e-mail. Computers are inherently dumb; they have to follow a sequence of instructions in order to accomplish anything.
Industrially, that is the basis of programming. However, there are other elements involved, such as the system design cycle. Industrial-grade programming also requires an abundance of discipline and planning in order for programs to be reliable, robust and easily maintainable.
Why program? Surely there are already enough good, ready-made programs in the world? Those programs, however, were designed with specific purposes in mind. When a problem involves unusual or specialized needs, a program must usually be written to solve it. You don't need to look far for examples: programs such as WinAmp, ICQ and Motherboard Monitor were written in this same spirit.
So you want to program. But how do you go about "writing" the instructions for the CPU? To do so, you have to write instructions in a programming language. There are hundreds of programming languages including Assembly, Basic, Perl, Pascal, Python, FORTRAN, C, C++, and Java. That's a lot of languages, but they can be divided into three general types :
- Machine languages
- Assembly languages
- High-level languages
Machine language is the lingua franca of a particular type of computer. It is tied closely to the hardware design of that computer, which means it is unavoidably machine-dependent. For example, Cyrix, AMD and Intel-based computers are fundamentally similar to each other because the CPUs from these three companies adhere to the x86 instruction set, which is the machine language for these processors. Machine language is extremely cumbersome for humans, as you can see below in a machine language program that adds the integer value 23 to the 8-bit value in the accumulator :
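The original listing has not survived here, so the following is an illustrative reconstruction, assuming an x86 processor:

```
00000100 00010111
```

In hexadecimal this reads 04 17: opcode 04h (add an immediate byte to the AL accumulator) followed by the operand 17h, which is 23 in decimal.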
Obviously, machine language programming is very cryptic and tedious, and doesn't lend itself to maintainability. Thus, one of the earliest programming aids, the assembler, was created. Assemblers allowed the use of mnemonics, English-like abbreviations, to represent the elementary operations of a computer in place of the numeric codes of machine language. The assembler would then translate the user-written mnemonics into machine language, and so this style of programming was termed assembly language programming. The instruction to add 23 to the 8-bit value in the accumulator is clearer in assembly language :
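Again the original listing is missing; assuming x86 assembly syntax, the instruction might read:

```
add al, 23    ; add the immediate value 23 to the accumulator register AL
```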
Although assembly language is certainly an improvement over machine language, its readability is still lacking, and many instructions are required to accomplish even the simplest task. Moreover, both machine and assembly languages remain machine-dependent, which makes them non-portable: a program written for one type of computer cannot be used on another without major alterations. Thus, these languages are termed low-level languages.
To solve the shortcomings of low-level languages, high-level languages were developed. Whereas a low-level language concentrates on the capabilities of the underlying hardware, a high-level language shifts the focus to the problem-solving process. The translator programs that convert high-level language instructions into the appropriate machine language are called compilers. This opens the door to program portability: aside from minimal modifications, a programmer need only recompile a program to use it on another platform. Another distinct feature of high-level languages is the use of everyday English and mathematical notation. Going back to the problem of storing the sum of two numbers in a variable, a high-level language solution might look like this:
result = 3 + 8
Most of the examples cited earlier, such as Perl, Pascal, FORTRAN, C, C++, and Java, are high-level languages. C and C++ are among the most powerful and most widely adopted. In this series, the primary focus will be on the C programming language, with emphasis on object-oriented programming using C++ later on. C++ is largely a superset of C. Moreover, sufficient knowledge of structured programming is important, as it will still be applied later on in object-oriented programming.
A History of C
C was originally developed in the 1970's by Dennis Ritchie at Bell Telephone Laboratories, Inc. (now AT&T Bell Laboratories). It is an outgrowth of two earlier languages, BCPL and B, which were also developed at Bell Laboratories. C was largely confined to use within Bell Laboratories until 1978, when Brian Kernighan and Ritchie published a definitive description of the language.
Computer professionals, impressed with C's many desirable features, began to promote the use of the language. By the mid 80's, the popularity of C had become widespread. Numerous C compilers and interpreters had been written for computers of all sizes and numerous commercial application programs had been developed. Moreover, many commercial software products that were originally written in other languages were rewritten in C in order to take advantage of its efficiency and its portability.
Most commercial implementations of C differ somewhat from K&R's original definition. This has created minor incompatibilities among different implementations of the language, diminishing the portability that the language attempts to provide. Consequently, the American National Standards Institute (ANSI) began work on a standardized definition of the C language (ANSI committee X3J11). Most commercial C compilers and interpreters are expected to adopt the ANSI standard once it has been completed (many already follow the partially completed recommendations). They may also provide additional features of their own.
This concludes the introduction for the series. Stay tuned for the next article...