Monday, July 11, 2005

The Development of Information Processing

Mankind has always communicated, and did so in written language earlier than many admit. But the invention of the printing press launched a major change in the spread of knowledge, because it was so much more efficient than hand-copied books and traveling teachers and storytellers. More recently, the Internet has accelerated the spread of knowledge more than ever.

But we have discovered how to do much more than simply reproduce and distribute information efficiently. Perhaps it began when clockmakers figured out how to put short and long notches on a wheel to control the chiming of a clock. Or when the player piano was invented, in which holes in a roll of paper controlled the sequence and timing of the notes played. Other machinery was made to robotically play drums, violins, horns, and other instruments. All these machines translated recorded information into sound. Then the phonograph was invented, which translated sound into recorded information, and afterward translated it back to sound as often as desired. Then came the telephone, which translated sound to an electrical form that could be transported over long distances without recording and playback.

In some of these examples, you can say that the recorded information was translated into mechanical action. For example, the player piano roll controlled the striking of the piano keys. Much the same idea, in fact an older one, drove the machines that automated the weaving of tapestry designs: punched holes controlled whether threads were lifted above or dropped below the path of the shuttle of the loom. Later, punched paper tape was used to control machines that could drill any set of holes in a part to be manufactured, or robotically apply any set of rotary tools to a manufacturing task. These all translate recorded information into a sequence of actions.

Other people were interested in just processing the information, that is, calculating. Astronomers and other scientists relied on long, tedious, and error-prone calculations. Much of the general-purpose calculation could be prepared beforehand and stockpiled (like prepared foods) -- for example, a table of square roots, or of trigonometric functions. So people created adding (and subtracting) machines, multiplying (and dividing) machines, and 'difference engines' to generate and use these tables. These machines translated information (such as "30x31") into a useful equivalent of that information (such as 930). The method was mechanical action (for example, rotating digit wheels), but the overall function was information in and equivalent (derived) information out.
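
Just to give the flavor of it, here is a minimal sketch of generating such a table, written in the same sort of programming language I'll show a little later in this article. The difference engines did the equivalent with gears and rotating digit wheels, of course:

program SquareRootTable;
var
  N: Integer;
begin
  { Print each whole number from 1 to 10 beside its square root,
    to four decimal places. }
  for N := 1 to 10 do
    WriteLn(N, '   ', Sqrt(N):0:4);
end.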

So far, all of the types of machines mentioned use a single sequence of information, except when the operator intervenes by choosing the sequence -- choosing the song to be played, or the hole pattern to be drilled, or the formula to be calculated.

Now, what if the machine could control its own sequence? For example, the music player could play the verse, then chorus, change key, play the verse and chorus again, increase the volume and repeat the chorus. The drilling machine could drill 30 boards with pattern A, then 50 boards with pattern B. Or the calculating machine could compute formula A, and if the result is positive, compute formula B, else formula C. And repeat this for another set of data, and another, until 50 sets of data have been processed.

The concept of sequence control, or self-control, the machine talking to itself, so to speak, thrust information processing into the computer age. The first attempts were mechanical, then came vacuum tubes, then transistors. That's the stage where I first got involved. You could see the transistors back then, because they weren't miniaturized yet. Today, millions of transistors are packed into one small package.

It was obvious that many of the things these machines were designed to do were similar to human activities, so words like read, write, memory, and decision were used to describe machine functions. We knew we were trying to emulate human thinking, difficult as it was, and still is.

When computers were developed, the machines became more general-purpose. That's because the machines now had two kinds of information: the information being processed (data), and the information that controlled the processing (software). The visible machine (the hardware) could do almost any processing, given suitable software. Give it word processing software, and the machine becomes a word processor. Give it accounting software, and it becomes an accounting processor. Give it telephone control software and hide it inside a telephone, and you have a 'smart' telephone. And don't tell the consumer that there's a computer in his telephone, lest he be afraid to use it.

At some point along the way, the designers realized that the information they were putting into these machines was actually language. These were strange languages, designed to fit the design of the machines, but still, they were language. Machine language was difficult and error-prone to write, so more human-like languages were designed that could be translated (by computer, of course) into machine language. A simple example of 'programming language':

X := 0; repeat X := X + 1 until X > 9;

Translation to real English: Set the data called 'X' to zero, then keep adding one to it until it is greater than nine.
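
And remember the self-controlling machine I imagined earlier, the one that computes formula A, chooses formula B or C depending on the result, and repeats for 50 sets of data? Here is a rough sketch of how that might look. The names FormulaA, FormulaB, FormulaC, and ReadNextDataSet are just stand-ins for whatever the real job would require:

Count := 0;
repeat
  ReadNextDataSet;
  Answer := FormulaA;
  { The machine decides its own next step, based on the result. }
  if Answer > 0 then
    Answer := FormulaB
  else
    Answer := FormulaC;
  Count := Count + 1;
until Count = 50;   { stop after 50 sets of data }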

This programming language gets translated into machine language for use by the computer hardware. I won't show it to you -- trust me, it just looks like gibberish.
___________________

Contemporary with the computer scientists, the scientists of biology were discovering DNA and RNA, and began unraveling the mysteries of the machinery of life. The DNA, they found, was another kind of machine language. Whereas our computers use an alphabet of 0 and 1 (zero and one), the DNA uses an alphabet of A, G, C, and T, which stand for the bases Adenine, Guanine, Cytosine, and Thymine, the symbolic parts of a DNA molecule. These are arranged in a sequence, just as are the symbols of human and computer languages. Some parts of the sequence describe how to make proteins, and some parts function like punctuation. Some parts haven't been deciphered yet; some researchers have assumed that these are useless junk, but others are beginning to find uses for the presumed 'junk DNA'.
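
For a computer designer, the parallel is hard to resist. Here is a toy sketch, just an illustration of the idea and certainly not how biologists work, of reading a DNA string three letters at a time, the way the cell's machinery reads one codon at a time. (ATG really is a 'start' codon, and TAA really is a 'stop' codon.)

program ReadCodons;
const
  DNA = 'ATGGCCTAA';   { a start codon, one coding codon, a stop codon }
var
  I: Integer;
begin
  I := 1;
  { Step through the sequence one three-letter codon at a time. }
  while I + 2 <= Length(DNA) do
  begin
    WriteLn(Copy(DNA, I, 3));
    I := I + 3;
  end;
end.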

Those who subscribe to the faith called Evolution have convinced themselves that all this complex machinery, which we have only begun to decipher, came into existence through random processes. They would like to believe that somehow information can arise out of randomness, but we who design computers know better.

Making information out of nothing is like the pseudoscience of perpetual-motion machines. Those were proven to be impossible, because energy cannot be perfectly stored or transmitted. A little bit always leaks out of the machine -- typically friction creating heat -- lost energy. In computer science and information theory, we know that, likewise, information cannot be perfectly stored or transmitted. A little bit (or more) of error always creeps in, and the data erodes. That's why hard drives have CRC (Cyclic Redundancy Check) codes to detect errors, and why we back up our data and software with extra copies. That's why our bodies have redundant copies of the DNA.
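
For the curious, here is a minimal sketch of the idea behind a CRC: run every byte of the data through a fixed polynomial, and the resulting check byte will (almost always) change if the data erodes. This is a generic 8-bit CRC using the polynomial $07, not the particular code that any one disk drive uses:

function Crc8(const Data: array of Byte): Byte;
var
  I, Bit: Integer;
  Crc: Byte;
begin
  Crc := 0;
  for I := 0 to High(Data) do
  begin
    Crc := Crc xor Data[I];   { mix the next byte into the check code }
    for Bit := 1 to 8 do
      if (Crc and $80) <> 0 then
        Crc := ((Crc shl 1) xor $07) and $FF
      else
        Crc := (Crc shl 1) and $FF;
  end;
  Crc8 := Crc;                { one byte that 'summarizes' all the data }
end;

Store the check byte alongside the data; recompute it when the data is read back, and if the two disagree, a bit has eroded somewhere.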

Yes, we see DNA errors (genetic defects), and we see adaptive adjustments to the 'gene pool', but nobody has ever observed information being created out of nothing. Like any other information, it had to come from Somebody. That's a subject that we will pursue further, later.
