Thursday, March 18, 2010

The First Digitally-Controlled Designs

Since the discovery of DNA and RNA and the Genetic Code, it has been indisputably clear to biologists that the structure and function of all living things is determined by the information stored in the DNA. The interpretation of the DNA information according to the Genetic Code creates an enormous set of specific proteins and other complex organic molecules that implement the structure and function of a particular organism. (See The Genetic Code - how to read the DNA record and More About the Genetic Code.) Some of these complex molecules are building blocks of the living structure; some are the tools or 'workmen' that build the structure; other organic molecules function more like supervisors that control when, where, and how this work of construction is done. Still others supervise various functions of the living structure, such as digestion, breathing, sight, growth, etc. All of these complex functions are guided not exclusively by chemical laws, but also by the information from the DNA. (See Life is more than chemistry and Can Chemical Evolution Work?) This is true of all living things, whether a single-celled organism or a much larger plant or animal such as an oak tree or an elephant. Such a complex, coordinated interplay of material and function at multiple levels is clearly DESIGN.

I ought to explain that I understand and appreciate this from experience. I worked for 43 years as a designer and inventor of computers and other digital systems, acquiring 45 patents in that time; and in my retirement years, I have been studying organic chemistry. When I started my career, a typical computer was a roomful of refrigerator-sized cabinets, but less powerful than today's pocket calculator; and I have seen the technology grow functionally and shrink physically since then. In between then and now, the Quintrel computer that I designed, one of the first to do speech processing (such as speech recognition) in real time (that is, as fast as you can talk), was the size of a cookie baking pan. Inside all GPS satellites, the computer system that controls all the signals is my design. So I know a design when I see one.

I especially appreciate the advantages of a digitally-controlled design over a design that is just digital. The old-fashioned mechanical adding machines did digital calculation, but the control was manual; that is, the operator had to select the sequence of operations as well as the input data. I remember the company used to have one with a typewriter-like shifting carriage so that it could do multiplication and division; but it was still manually controlled.

In human history, digitally controlled designs started with things like the 'player piano', where the keyboard was controlled by a roll of paper with punched holes to specify the sequence and timing of the notes, and the Jacquard loom, where punched holes caused threads to be raised or lowered to create intricate designs such as brocade and damask. Herman Hollerith adapted the punched cards of the weaving industry for data input for his Tabulating Machines, and Charles Babbage planned to use punched cards for his Analytical Engine, which began the age of computers. (See The Development of Information Processing.)

Let me tell a story that illustrates why, as I said earlier, I especially appreciate the advantages of a digitally-controlled design.

There was a period in my career when we designed digital devices for communication of digital messages. No calculation in the ordinary sense of the word was needed, but the digital logic needed to be 'smart'. For example, before sending a piece of a message (called a packet), an error-checking code needed to be generated and attached to the message, along with a packet number. When receiving a packet, the error-checking code needed to be checked to see if the packet had any errors. (Most errors were detectable.) If the packet had no errors, an 'ack' (acknowledgement) message was returned to the sender; but if errors were detected, a 'nak' (no-acknowledgement) message was returned. Both ack and nak messages included the number of the good or bad packet that had been checked. A nak message was a request to resend the packet (hoping to get it right on the next try), and an ack message told the sender that it no longer needed to keep a copy of the packet. A communication protocol like this was controlled by logic hardware similar to that used to construct a computer, but there was no computer and no software involved. The designs were digital, but not digitally-controlled as computers are controlled by software.
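The ack/nak scheme described above can be reduced to a short sketch. This is in Python purely for illustration (the real designs were hardwired logic, not software), and the simple additive checksum here is a stand-in for whatever error-checking code the actual hardware generated:

```python
def checksum(data: bytes) -> int:
    """Toy error-checking code: 16-bit sum of the payload bytes."""
    return sum(data) & 0xFFFF

def make_packet(number: int, payload: bytes):
    """Attach a packet number and an error-checking code before sending."""
    return (number, payload, checksum(payload))

def receive(packet):
    """Return ('ack', n) for a good packet, ('nak', n) for a bad one."""
    number, payload, check = packet
    if checksum(payload) == check:
        return ('ack', number)   # sender may now discard its copy
    return ('nak', number)       # sender must resend this packet

good = make_packet(7, b"hello")
print(receive(good))                   # ('ack', 7)

number, payload, check = good
corrupted = (number, b"jello", check)  # simulate a transmission error
print(receive(corrupted))              # ('nak', 7)
```

A real protocol would also handle lost packets and lost acknowledgements with timers, but the decision logic is the same: check, then ack or nak by packet number.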

A major problem with this style of design was that if a design error needed to be corrected, or a new design feature added, new parts would need to be added, and the layout and wiring of the parts modified. The parts might not fit, so even the mechanical design might need to be redone.

An obvious solution to this problem is to include an 'embedded' computer in the design, so that software can define the functions of the design, because software is far more easily changed than the hardware. Once the software is thoroughly tested and no longer needs to be changed, it is typically embedded in read-only memory (ROM) and is called 'firmware'. This tactic is commonplace today, with embedded computers in automobiles and in nearly every electrical household appliance. That's easy today, because electronic circuits have shrunk enough for small computers, including all memory and other supporting logic, to be placed in one small, low-cost chip. But back then, electronics had shrunk only enough for simple circuits such as a counter to fit in one chip. An embedded computer would require at least several chips.

We couldn't buy a general-purpose computer chip (they didn't exist then), but had to design a computer made of several chips. But this gave us the freedom to design a smaller 'custom' computer with only the functions actually needed. For example, we didn't need to add, subtract, multiply or divide; the only 'arithmetic' needed was to count the bits of a packet. Mostly, the computer needed to make decisions based on a specialized set of conditions. If such a design primarily controlled not a sequence of calculations, but a sequence of other operations (such as those needed for a communications protocol), it was usually called a 'controller' rather than a computer. Often, such a simplified computer / controller could be made with only a half-dozen parts. This 'custom' controller would thus have a 'custom' set of instructions that it could execute. (Each instruction is a group of binary codes and data that tell the computer / controller what to do for each step of its actions.)
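To make the idea of a 'custom' instruction set concrete, here is a sketch of the fetch-decode-execute loop at the heart of any such controller. The opcodes (INC, DEC, JNZ, HALT) are invented for this example and are not the actual instruction set we used; note that the only 'arithmetic' is counting, and most of the work is decision-making:

```python
def run(program):
    """Execute a list of (opcode, operand) instructions, as a hypothetical
    custom controller would, and return the final counter value."""
    counter = 0   # the only 'arithmetic' register: a counter
    pc = 0        # program counter: which instruction executes next
    while True:
        op, arg = program[pc]
        pc += 1
        if op == 'INC':          # count up
            counter += arg
        elif op == 'DEC':        # count down
            counter -= arg
        elif op == 'JNZ':        # decide: jump if the counter is non-zero
            if counter != 0:
                pc = arg
        elif op == 'HALT':
            return counter

# Count down from 3 to 0 in a loop -- decisions, not calculation.
program = [
    ('INC', 3),
    ('DEC', 1),   # loop body
    ('JNZ', 1),   # keep looping while the counter is non-zero
    ('HALT', 0),
]
print(run(program))   # 0
```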

Theoretically, a programmer (software writer) could write out the sequence of instructions (the software, or program) in the form of the ones and zeros that the hardware actually reads. But this would be very error-prone, because it is hard for people to memorize these codes, or even to copy them from a list without making mistakes. So, instead, equivalent codes that look more like English are invented, thus creating a special language that is much easier to learn and understand. Then a program called an 'assembler' is used to translate the semi-English to the ones and zeros that the hardware uses. (Also, decimal numbers are translated to binary numbers.)
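The translation an assembler performs can be sketched in a few lines. The mnemonics, opcodes, and the 8-bit instruction format here are invented for illustration; a real assembler also handles labels, comments, and error reporting:

```python
# Invented opcode table for a hypothetical controller: each mnemonic
# maps to a 2-bit binary code.
OPCODES = {'INC': 0b00, 'DEC': 0b01, 'JNZ': 0b10, 'HALT': 0b11}

def assemble_line(line: str) -> int:
    """Pack one 'MNEMONIC operand' line into an 8-bit instruction word:
    a 2-bit opcode in the high bits, a 6-bit operand in the low bits.
    Decimal operands are translated to binary along the way."""
    mnemonic, operand = (line.split() + ['0'])[:2]
    return (OPCODES[mnemonic] << 6) | (int(operand) & 0x3F)

source = ["INC 3", "DEC 1", "JNZ 1", "HALT"]
for line in source:
    print(f"{line:8} -> {assemble_line(line):08b}")
```

The semi-English source is easy for a person to read and check; the ones and zeros it produces are what the hardware actually reads.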

Thus, almost every computer / controller design would have a different instruction set, and a correspondingly different 'assembly language', and a different assembler program. The assembler is what connected the software design to the hardware design.

Mostly, there were two kinds of designers: hardware logic designers who knew at least how to design parts of the computer hardware, and software designers who knew how to write software. A third kind of designer was a relative minority: the 'system designer', who understood both hardware and software -- the whole system, or the 'big picture'. (See The Start of System Engineering.) A few of these, who also knew the theory of formal languages, were able to write assembler programs, and even 'compilers', which can translate more abstract software languages. With my insatiable curiosity and willingness to educate myself in related fields on my own time, I became part of that minority.

The engineering supervisors resisted the idea of embedding computers in a design. Their reasoning was that we had hardware designers and software designers, but nobody who knew how to make a custom assembler. We would have to give such a job to outside specialists, which would be too expensive and troublesome.

It irked me that this judgement was hindering us from making compact and flexible designs. So, on my own time, I designed what I called the "General-Purpose Assembler". It was a step beyond a custom assembler, because before assembling a program, it first read a "language table", which defined the custom assembly language. So, the next time that a supervisor tried to veto a proposal for a design with an embedded computer / controller, I explained that I "happened to have" an assembler that could do the job. I did the extra work on my own time because I knew that digital control of a design was an optimum design paradigm.
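The table-driven idea behind that "General-Purpose Assembler" can be sketched as follows. The assembler itself is generic; a "language table", read first, defines each custom assembly language. The tables, mnemonics, and bit widths here are invented for the example, not the actual tables we used:

```python
def make_assembler(language_table):
    """Build an assembler from a 'language table' mapping each mnemonic
    to (opcode, operand-bit-width). One generic program, many languages."""
    def assemble(line: str) -> int:
        mnemonic, *rest = line.split()
        opcode, width = language_table[mnemonic]
        operand = int(rest[0]) if rest else 0
        return (opcode << width) | (operand & ((1 << width) - 1))
    return assemble

# Two different embedded controllers, two different custom languages,
# one assembler program -- just two different tables.
controller_a = make_assembler({'SEND': (0b0, 7), 'WAIT': (0b1, 7)})
controller_b = make_assembler({'LOAD': (0b00, 6), 'JUMP': (0b11, 6)})

print(f"{controller_a('SEND 5'):08b}")   # 00000101
print(f"{controller_b('JUMP 2'):08b}")   # 11000010
```

The design choice is the same one that justified embedded computers in the first place: move what varies out of the fixed machinery and into easily changed data.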

I wrote an instruction manual for how to construct a "language table" and how to use the "General-Purpose Assembler", and soon other departments and projects were using it. A few years later, I estimated that about two dozen language tables had been written, creating that many custom assemblers for that many different embedded controllers. The "General-Purpose Assembler" also became a component of the assembler for the Quintrel processor that I mentioned earlier. These were all digitally-controlled designs.

Now, this story may seem like an utter digression from my initial discussion of DNA and RNA and the Genetic Code, but it was all to underscore and emphasize the following point:

I used to think of digitally-controlled designs as a modern phenomenon -- but this is true only if you are limited to designs made by humans. When I started to study organic chemistry and the workings of the Genetic Code, I soon realized that the greatest Engineer of all, God, got there first. For indeed, all living things of all kinds are digitally-controlled designs. The DNA is the read-only memory (ROM) that holds the genome, which is the software (firmware) that controls the chemistry that plays the role of 'hardware'. Each unit of DNA (nucleotide) is equivalent to two bits, having one of four values, and each codon (three DNA units) is equivalent to six bits, with one of 64 values. It compels one to ask "Where did all that DNA-software come from?" (See In The Beginning Was Information.) The reason why there is only one universal genetic code, and why so many life-forms share common design structures, is not that all descended from a single common ancestor (unlikely if evolution is inevitable, as Richard Dawkins claims), but that all have a single Creator.
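The bit-equivalence just described is easy to demonstrate. Here is a small sketch; the particular base-to-bit assignment is arbitrary (any of the four values could be any two-bit code):

```python
# Each nucleotide carries two bits (one of four values) ...
BASE_BITS = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}

def codon_to_bits(codon: str) -> int:
    """Pack a three-letter codon into a six-bit number (0..63)."""
    value = 0
    for base in codon:
        value = (value << 2) | BASE_BITS[base]
    return value

# ... so a codon of three nucleotides carries six bits: one of 64 values.
print(codon_to_bits('ATG'), len(BASE_BITS) ** 3)   # 14 64
```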

I know that some readers will dismiss my comparison of life designs to man-made designs as mere analogy. But my argument rests on more than analogy. It involves what in category theory are called isomorphisms. Rather than getting too technical, I will illustrate the principle involved with a simple example:

If two species have sufficient similarities (putting them in the same category), we can expect them to have similar locomotion. For example, cats and dogs both have four legs of nearly equal lengths, and the knees bend in the same directions; so we can expect them to walk and run in similar ways. Frogs, kangaroos, and apes also have four legs, but not all four of equal length, so the locomotion is different. There is greater similarity of function when there is greater similarity of structure.

With similar logic, we can show that DNA-controlled life forms are more similar to embedded controllers than to personal computers. For example, in both, the completed design has no capability of loading new software (which is not true of PCs). In both computers and controllers, the same hardware with completely different software will have completely different functionality. In life, the same chemical laws, the same chemical resources (food, air, water, etc.), and the same genetic code with a completely different genome will have completely different functionality.

As an experienced designer, I not only know a design when I see one, I know a digitally-controlled design when I see one; and I appreciate that it is an optimum design paradigm. No wonder that people are using the term "Intelligent Design" to describe living things.

For more on this subject, see The Digital Control of Life.

Monday, March 08, 2010

God's Unilateral Agreement

The Bible is divided into two major parts called the Old Testament and the New Testament. "Testament" and "Covenant" are two English words that are used to translate the Hebrew "beriyth" and the Greek "diatheke" as used in the Bible. Both words are used to refer to a solemn or legally binding contract or treaty.

In the time of Abraham, a covenant between men was often solemnized by a ceremony whereby an animal was cut in half and both parties walked between the pieces of flesh, signifying "so let it be done to me if I do not keep this covenant". But when God made a covenant with Abraham (in Genesis 15:7-21) to give to his descendants the "Promised Land" (called "The Land of Israel" until the Romans renamed it "Palestine"), the ceremony was remarkably changed. After "a deep sleep fell upon Abram" (verse 12), "there appeared a smoking oven and a burning torch that passed between those pieces" (verse 17), signifying the presence of God certifying the contract. Since sleeping Abraham (then called Abram) did not also walk between the pieces, this signified that the covenant was unilateral -- God took full responsibility for keeping His promise to Abraham and his descendants.

However, God's covenant through Moses with His chosen people concerning the Law, repeated in the books of Exodus, Leviticus, Numbers, and Deuteronomy, was a bilateral covenant, because the people promised "All that the LORD has said we will do, and be obedient" (Exodus 24:7 and elsewhere), and because curses were promised if His people broke the covenant, and blessings promised if they kept it.

Central to the Old Testament covenants was the sacrifice of animals, signifying the debt of mankind toward God for his sin, which was only symbolically paid by the animal sacrifices. The most solemn of these sacrifices occurred each year at Passover, which foretold the true sacrifice, the actual payment, to come.

In the New Testament, we read of a day when Jesus celebrated a modified Passover ceremony with His disciples. It was modified because He ended the ceremony before the third cup, and because the ceremony was given new meaning while fulfilling the old meaning. Jesus Himself was the Passover Lamb that the cup signified; and hours later, He was sacrificed on a cross. Jesus gave the first cup (Luke 22:17) to His disciples, saying "Take this and divide it among yourselves", but didn't partake Himself, saying "I will not drink of the fruit of the vine until the kingdom of God comes." At the second cup (Luke 22:20), Jesus said "This cup is the new covenant in My blood, which is shed for you." Since then, Christians repeat an abbreviated form of that Passover ceremony that we now call Communion or The Lord's Supper.

The sacrifice of Christ on the cross fulfilled the old covenants and introduced a new covenant because His actual and effective sacrifice ended the need for symbolic sacrifices. The apostle Paul called it the 'covenant confirmed by God in Christ' (Galatians 3:17), and 'a better covenant' (Hebrews 7:22, 8:6, 9:15, and 12:24). Paul also explains that when the prophecy of Jeremiah (31:31-34) is fulfilled, this covenant will be embraced by a rejuvenated nation of Israel.

The new covenant is also unilateral, because Jesus Christ has paid the price in full, and we bring nothing. God says that "...all our righteousnesses are like filthy rags". (Isa 64:6) There are many Bible passages that make it clear that our righteous obedience of God's laws contributes nothing to the salvation that Christ freely offers to us. A few verses are:

Gal 2:16
...a man is not justified by the works of the law but by faith in Jesus Christ, even we have believed in Christ Jesus, that we might be justified by faith in Christ and not by the works of the law; for by the works of the law no flesh shall be justified.

Rom 4:4-5
4 Now to him who works, the wages are not counted as grace but as debt.
5 But to him who does not work but believes on Him who justifies the ungodly, his faith is accounted for righteousness

Rom 11:6
...if by grace, then it is no longer of works; otherwise grace is no longer grace...

Eph 2:8-9
8 For by grace you have been saved through faith, and that not of yourselves; it is the gift of God,
9 not of works, lest anyone should boast.

If our righteousness is insufficient, then how can we settle our debt of sin with God, and escape condemnation? We need to "declare bankruptcy", by confessing our sin and accepting the free gift of Christ's sacrifice, His payment for our sin:

1 John 1:9-10
9 If we confess our sins, He is faithful and just to forgive us our sins and to cleanse us from all unrighteousness.
10 If we say that we have not sinned, we make Him a liar, and His word is not in us.

John 3:16-19
16 For God so loved the world that He gave His only begotten Son, that whoever believes in Him should not perish but have everlasting life.
17 For God did not send His Son into the world to condemn the world, but that the world through Him might be saved.
18 He who believes in Him is not condemned; but he who does not believe is condemned already, because he has not believed in the name of the only begotten Son of God.

(The name "Jesus" means "Savior", so believing in His name means that you trust His ability to save you.) There is no other way:

Acts 4:12
Nor is there salvation in any other, for there is no other name under heaven given among men by which we must be saved.

John 14:6
Jesus said to him, "I am the way, the truth, and the life. No one comes to the Father except through Me."

It doesn't take a lot of faith; a genuine faith is sufficient to begin, and God will cause your faith to grow. A man once told Jesus "Lord, I believe", and then, doubting himself, added "help my unbelief". (Mark 9:24) Ephesians 2:8, quoted above, indicates that even faith is a gift of God.

Rather than righteousness saving us, it is God's saving of us that leads to righteousness, because God's Spirit works in us to change us, and God's love motivates us to please Him:

Titus 3:5
not by works of righteousness which we have done, but according to His mercy He saved us, through the washing of regeneration and renewing of the Holy Spirit

Eph 2:10
For we are His workmanship, created in Christ Jesus for good works, which God prepared beforehand that we should walk in them.

Phil 2:13
for it is God who works in you both to will and to do for His good pleasure.

There are also many verses indicating that 'works' that result from God's work of renewal in us demonstrate to others that we truly know God, such as:

Titus 1:16
They profess to know God, but in works they deny Him

(All verses quoted from the New King James version)

So, we start by confessing our sins, which implies a desire to stop sinning; but God, while He helps us to stop sinning, does not make our success at not sinning part of His covenant. He knows we are unable to keep such a requirement. Our righteousness, however much it was, was insufficient in the first place, and it makes no sense to add it afterward. Any righteousness we achieve afterward is by availing ourselves of His help, so how can we claim any credit for that?

It is truly comforting to know that our right standing with God rests securely on His unilateral agreement and promise to us, motivated by His unconditional love for us.

When in trouble, we may reach up as a child to grasp His hand; but His hand is too big for us. Instead, He reaches down and holds us -- and that is far more secure.

For more on trusting God, click here.

Saturday, March 06, 2010

More About the Genetic Code

Sometimes I will go back to one of my blog articles to correct minor errors; and a few times I have made major additions. But a disadvantage of this is that people who have already read the original article will probably not go back to read it again.

A little more than a year ago, I wrote The Genetic Code - how to read the DNA record, and recently added some details to a paragraph and expanded the conclusion of the article. So here is the amended paragraph and the expanded conclusion.

The original article gave the impression that only the transfer RNA (tRNA) molecules define the genetic code. Actually, other, larger, molecules are also involved. So the amended paragraph clarifies this:

The key elements of translation are small transfer RNA (tRNA) molecules. Each kind of tRNA molecule has a region called the anticodon that can recognize and attach to a particular codon of a messenger RNA (mRNA) molecule. The tRNA molecule has another region called the "3' terminal" that attaches to a particular amino acid. This attachment is aided by molecules called aminoacyl-tRNA synthetases, of which there is generally one kind for each kind of amino acid. There are even helper molecules that provide a proofreading function to detect and correct any translation errors.

(Actually, there are some variations of this, but discussing these would be distracting. There are also many other types of complex molecules that control the code-translation process but do not define the genetic code -- another subject.)
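The codon-to-amino-acid mapping that the tRNA molecules and their synthetases implement amounts to a lookup table, which can be sketched like this. Only four entries of the real 64-codon genetic code are shown, and in the cell this 'table' is of course implemented by molecules, not software:

```python
GENETIC_CODE = {   # mRNA codon -> interpretation (a small, real sample)
    'AUG': 'Met',  # Methionine; also the 'start' signal
    'UUU': 'Phe',  # Phenylalanine
    'GGC': 'Gly',  # Glycine
    'UAA': 'STOP',
}

def translate(mrna: str) -> list:
    """Read an mRNA string one codon (three letters) at a time,
    appending amino acids until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = GENETIC_CODE[mrna[i:i + 3]]
        if amino == 'STOP':
            break
        protein.append(amino)
    return protein

print(translate('AUGUUUGGCUAA'))   # ['Met', 'Phe', 'Gly']
```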

Then I expanded the conclusion:

Where does the genetic code come from? It is not the result of chemistry or any laws of physics. It is determined by the set of tRNA molecule types and aminoacyl-tRNA synthetase types, which are constructed according to DNA information, which encodes not only the building materials and the building plans, but also the building tools and the building methods. In other words, the genetic code is just information that has been there since life began.

The number of possible genetic codes is a huge number, 85 digits long:

1,510,109,515,792,918,244,116,781,339,315,785,081,841,294,607,960,614,956,302,330,123,544,242,628,820,336,640,000

and all of these many codes would work equally well. But all of life uses just one genetic code, about 280 bits of information. Scientists deciphered that code in the 1960s, roughly a decade after Watson and Crick discovered the structure of DNA in 1953; but it was there since creation. The theory of evolution has no explanation for how the genetic code began, because it can't explain how information can arise from no information. Nor can it explain why there is only one genetic code (out of such a huge number of equally workable codes), even though there is extreme variation in everything else. The mechanism of the present genetic code is very complex, and evolutionary theory supposes that it randomly evolved from a simpler, smaller code. But because there are so many equally viable genetic codes, random evolution should have produced species with many different codes. The evolutionary explanation is far more unlikely than dumping a bucketful of dice on the floor and expecting them all to land with the same number up.

The creationist explanation is that the universal genetic code is like a signature of the creator, who chose a uniform code for all of the designs of life. A short story will illustrate the principle:

During the Cold War, Russia was suspected of stealing American technology. Proof came when some Russian war equipment given to a third country was captured and examined. It contained an integrated circuit that was identical to an American design. It is theoretically possible that the Russians had the same design concept, leading to a similar design. But digital circuits have thousands of component parts connected by thousands of wires. There are trillions of ways to position the parts on the chip and trillions of ways to route the connecting wires that work equally well. It would be impossible for the Russians to independently produce the same positions and routings even if the logical design were identical. But examination showed that the details were identical, even details left over from correcting wiring errors. In effect, there was an American 'signature' in the copied design.

For the Math fans, I'll add a footnote on how that 85-digit number was calculated:

That big number counts the number of ways that the 64 codons can be mapped to 21 interpretations, or interpreted as 21 'messages'. One message is to start with a Methionine (or add a Methionine if already started); one is to stop, and the other 19 messages are to add one of the other 19 amino acids [to the peptide chain that will fold into a protein molecule]. This 64-to-21 mapping can be enumerated in two steps:

First, we count the number of partitions of a set of 64 items into 21 non-empty, pair-wise disjoint subsets. In plain language, this means that:
  • Together, the 21 subsets must contain all of the 64 codons.
  • Each codon must be assigned to only one subset.
  • None of the subsets can be empty; each must contain at least one codon.
This count is calculated by a mathematical function called the Stirling number of the second kind, which is S(64, 21) in this case.

Second, we need to count the number of ways that the 21 subsets can be mapped to the 21 messages. This is the number of permutations of 21 things, which is 21 factorial, written 21!

So the desired number is S(64, 21) times 21! But typical computer hardware cannot directly compute numbers that large. Special software that partitions a big number into slices small enough for the hardware is needed. When I was designing special hardware for very large integers (for public key cryptography; I have two patents, #4,658,094 and #5,289,397, for that), I wrote such software so that I could test and verify my designs. So I used my 'BigInt' software to do the arithmetic.
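Today the same two-step calculation can be checked in ordinary Python, whose built-in integers are arbitrary-precision, so no custom 'BigInt' software is needed. As a sketch, computing S(64, 21) by the standard recurrence and cross-checking the product against the inclusion-exclusion count of the same mappings:

```python
from math import comb, factorial

def stirling2(n: int, k: int) -> int:
    """Stirling number of the second kind, by the recurrence
    S(n, k) = k * S(n-1, k) + S(n-1, k-1)."""
    row = [1] + [0] * k               # row for n = 0: S(0, 0) = 1
    for _ in range(n):
        new = [0] * (k + 1)
        for j in range(1, k + 1):
            new[j] = j * row[j] + row[j - 1]
        row = new
    return row[k]

# Step 1 times step 2: partitions into 21 non-empty subsets, times the
# 21! ways to assign the subsets to the 21 messages.
codes = stirling2(64, 21) * factorial(21)

# Cross-check: S(n, k) * k! also equals the number of onto mappings
# (surjections) from 64 codons to 21 messages, by inclusion-exclusion.
surjections = sum((-1)**j * comb(21, j) * (21 - j)**64 for j in range(22))
assert codes == surjections

print(len(str(codes)))        # 85  -- the 85-digit number above
print(codes.bit_length())     # 280 -- the 'about 280 bits of information'
```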