Sunday, September 05, 2010

Can Artificial Intelligence be Evolved?

Now and then we see a claim that an evolutionary program has advanced the pursuit of artificial intelligence (AI). Because the degree of 'intelligence' is invariably minuscule compared to advances in AI made by non-evolutionary methods, the report will typically use phrases such as "...evolved to produce basic intelligence" and "it is hoped that the discovery may in future...". That is, even though the 'intelligence' could be demonstrated by a pre-schooler, there is great hope of super-human intelligence down the road.

Usually the reports lack the detail required for critical review. Partly, this is justified because there is so much detail involved that it is not practical to publish everything. But usually there is not even complete disclosure at a functional level.

For example, suppose it is claimed that the 'artificial life-form' developed the use of memory. It could be that the simulated system was simply given the opportunity to do a task with or without memory, and it found that using memory led to greater success. Well, you don't need an evolutionary algorithm to do that. But without disclosing a functional description, the reader can be left with the impression that a memory mechanism was 'evolved'.
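To make the point concrete, here is a toy sketch of my own in Python (not taken from any published report): a plain side-by-side comparison, with no evolutionary algorithm at all, 'discovers' that using memory leads to greater success on a task where the previous symbol predicts the next one.

```python
import random

def run_task(strategy, trials=1000, seed=1):
    """The 'task': predict the next symbol in a stream where each
    symbol usually repeats the previous one. Each strategy is shown
    the previous symbol; a memoryless strategy simply ignores it."""
    rng = random.Random(seed)
    prev = 0
    correct = 0
    for _ in range(trials):
        nxt = prev if rng.random() < 0.9 else 1 - prev
        if strategy(prev) == nxt:
            correct += 1
        prev = nxt
    return correct / trials

memoryless = lambda prev: 0       # ignores history entirely
with_memory = lambda prev: prev   # uses the remembered symbol

# An ordinary comparison -- no evolutionary algorithm required --
# selects the strategy that uses memory.
best = max([memoryless, with_memory], key=run_task)
print(best is with_memory)  # True
```

Nothing here 'evolved' a memory mechanism; the memory strategy was supplied, and simple scoring picked it.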

For many readers, AI is a great mystery, and the reader has no way to judge whether such reports are overly optimistic or not. So here I will try to remove much of the mystery by providing an overview without getting too deeply into the math and logic.

An Overview of AI

AI is a broad field of study, and not all researchers or developers have the same goals, nor use the same methods.

The process of intelligent thinking is generally described as having two parts: analysis (taking apart) and synthesis (putting together). Closely related to these, the terms induction (logically proceeding from the specific to the generic) and deduction (from generic to specific) are also used. Some AI efforts focus on analysis, some on synthesis, and some on both.

For example, one project focused on using highly abstract formal language to build a database of 'expert knowledge' garnered from doctors (who did most of the analysis) to synthesize an 'artificial expert' to diagnose diseases, thus simulating a team of medical experts.

Fields of practical science generally have two sub-fields of endeavor: research and development. Research seeks to discover new principles and methods, and development seeks to find effective ways to use the new principles and methods to accomplish practical purposes. Some AI efforts focus on research, some on development, and some on both.

There is a wide variety of methods used to try to create artificial intelligence. Usually an AI project focuses on one method, but sometimes methods are combined. Some methods are attempts to mimic natural patterns or structures.

The 'evolutionary' (selective adaptation) algorithms are in this category. Some model biological selective adaptation closely, and some more loosely, using the 'evolution' concept more as inspiration. It seems to depend on the motive. The motive may be theoretical -- to prove evolution -- and they may talk of "intelligent agents". Or the motive may be practical -- to provide better computing -- and they may talk of "intelligent machines" instead.

Another AI method that mimics nature is neural networks. I remember that the early research in this area focused closely on modelling the operation of actual neurons, trying to understand how they worked. Some used software models, and others built circuits that mimicked neurons. But these early models were very complex, so researchers chose simpler neuron models so that they could build larger networks.

Other AI methods seek to borrow and adapt the mental methods that people use to reason and solve problems. These AI systems are primarily rule-based -- instead of just handling data that represent facts, they use lists of rules, including rules for choosing rules, or making new rules from other rules, etc. They use category theory, means-end analysis, and planning strategies to try to construct a logical network connecting known facts to a target question. These rule-based systems depend heavily on very abstract formal languages to describe relationships, categories, and attributes of objects.
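As a rough illustration (a toy of my own, not any particular AI system), the heart of a rule-based system can be sketched in a few lines of Python: facts are data, and 'forward chaining' keeps applying rules until nothing new can be concluded.

```python
# Known facts, and rules of the form (conditions, conclusion):
facts = {"has_fur", "gives_milk"}
rules = [
    ({"has_fur", "gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
    ({"is_mammal"}, "is_warm_blooded"),
]

# Forward chaining: apply any rule whose conditions are all known
# facts, and repeat until no rule adds anything new.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Note that "is_carnivore" is never concluded: its conditions are not all satisfied. Real systems add the formal languages, categories, and planning strategies described above on top of this basic engine.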

As an engineer and programmer who has seen up close the development of computing from the days when transistors were first used, I see the rule-based AI methods as a natural extension of the development of computing.

For example, suppose the solution of a problem requires us to determine the length H of the hypotenuse (longest side) of a right triangle when we know the lengths A and B of the shorter sides. To find the answer for a particular case, all we need to do is arithmetic. (I'm including finding square roots as arithmetic.) A machine that can do arithmetic for us is called a calculator.
But to express how to solve all such problems, we use an algebraic expression to say that H is the square root of the sum of A squared and B squared. We have gone to a higher level of abstraction -- from describing the solution of one problem to describing the solution of a class of similar problems. A machine that can do arithmetic for us, guided by an algebraic expression (a formula) is called a programmable calculator.

Now suppose that we need to know how to compute length B when we know length H and length A. This requires a different algebraic expression, which can be derived from the expression that we described earlier. A programmer who knows algebra can manipulate the first expression to derive the second expression, then write another program to solve this new kind of problem. But suppose that we require the computer to do this algebraic manipulation itself? This is a different matter. Instead of merely writing software that can interpret an algebraic expression to do the correct arithmetic procedure, the programmer must write software that "knows how to do algebra", that is, to manipulate algebraic expressions. Now we have stepped up to an even higher level of abstraction.
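The two formulas can be sketched in Python; note that each function encodes one algebraic expression that the programmer derived by hand, which is the 'programmable calculator' level of abstraction, not the 'knows algebra' level.

```python
import math

def hypotenuse(a, b):
    """H = sqrt(A^2 + B^2): the expression described in the text."""
    return math.sqrt(a**2 + b**2)

def side(h, a):
    """B = sqrt(H^2 - A^2): the algebraically rearranged expression,
    derived by the programmer, not by the computer."""
    return math.sqrt(h**2 - a**2)

print(hypotenuse(3, 4))  # 5.0
print(side(5.0, 3))      # 4.0
```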

Years ago, I bought a program called MathCad (from Mathsoft) that "knows how to do algebra" -- and calculus, statistics, matrix algebra, graphs, and many other mathematical techniques. It is so good at this that the program taught me math that I hadn't learned in college. There is a similar program named Mathematica produced by Wolfram Research, which is more powerful (and more expensive).

Now, Wolfram Research is developing an even 'smarter' program, making the current "Wolfram Alpha" available on the Internet. It has access to a wide variety of scientific data. For example, you can enter "amino acids" and it will list the 20 kinds. Enter "weights of amino acids", and it will assume you meant atomic weights, and tell you the highest, lowest and median values. Better yet, it knows how to interpret these facts. Enter "distance from Venus to Mars", and it will consult its data about the planetary system, and report that right NOW, the distance is 148.7 million miles (and in other units) and that it takes 13 minutes for light to travel that distance in empty space. It's an even higher level of abstraction, without evolution, just more abstract rules.


In summary, the rule-based style of AI has been far more successful in a practical way (accomplishing smarter computing than ever before) than the neural networks and evolutionary algorithms. The neural and evolutionary strategies are pursued not for near-term practical benefit, but on theoretical grounds.

The neural networks are pursued to try to demonstrate that brain-like structures can produce artificial 'thought', in contrast to philosophers who see the brain as not the producer of thought, but more like the soul's keyboard. After decades of research, progress has been painfully slow, and results very limited.

The evolutionary strategies are pursued to try to demonstrate that evolution can produce design. But so far, the results only demonstrate what selective adaptation does in the biological world -- namely, to adjust and adapt a design within the confines of the resources already provided within the design.

To designers, such as myself, the reason why the rule-based systems are far more successful is obvious: they are compatible with the top-down principles of design, which work from well-defined purposes toward increasingly more-detailed design. The other methods attempt to achieve design bottom-up, starting with the details and working toward a goal that is not defined, with no strategy as to how to get there. It's implicitly based on the myth that randomness magically produces information, or on the concept of a 'learning machine'. When a design IS 'found', AND the researchers allow you to look at their software, it becomes evident that the result was actually designed into the software. When you hide Easter eggs and search randomly, you might actually find Easter eggs.

But 'learning machines' are inherently complex, and must themselves be designed. Also a 'learning machine' is just an optimizer that finds the 'best' within some domain that is limited by the design. And the principle of irreducible complexity is a huge hurdle that blocks the bottom-up approach. When probabilities are computed for achieving complex designs by random methods, they invariably turn out to be practically zero.

What is "practically zero"? I will define it as 1 divided by a very large number. So what is "a very large number"? In the physical world, it is hard to get numbers larger than about 100 digits. For example, if you estimate the ratio of the mass of the observable universe to the mass of the electron, you get only an 84-digit number. But when you compute the probability of getting some irreducibly complex design by a random method, and express it as 1 divided by X, then X is typically thousands of digits long.
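The 84-digit figure is easy to check. Using commonly cited estimates (these numbers are assumptions for illustration: roughly 1.5 x 10^53 kg of ordinary matter in the observable universe, and 9.109 x 10^-31 kg for the electron):

```python
# Commonly cited estimates (assumptions for illustration):
mass_universe_kg = 1.5e53      # ordinary matter, observable universe
mass_electron_kg = 9.109e-31   # electron rest mass

ratio = mass_universe_kg / mass_electron_kg
digits = len(str(int(ratio)))
print(digits)  # 84 -- an 84-digit number
```

A probability of 1 divided by a thousands-digit number is unimaginably smaller than 1 divided by this 84-digit number.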

That generally means that the universe doesn't have enough material and enough time for the random experiment to succeed. That's practically zero.

Saturday, June 26, 2010

The Digital Control of Life

In other blog articles such as Life is More Than Chemistry and Can Chemical Evolution Work? I point out how living things fundamentally differ from nonliving things. Both are controlled by the laws of chemistry, but in living things, the chemistry is guided by information from the DNA data source. That explains why organic molecules are generally much larger than inorganic molecules. To make such large molecules, the limitations of pure chemistry are overcome by 'helper' molecules such as chaperone molecules made according to the DNA design plan. If the DNA data source is cut off, the organic molecules decompose as the laws of pure chemistry take over.

In a recent blog article, The First Digitally Controlled Designs, I point out that each living organism is a digitally-controlled design, using the same design paradigm now commonly used in most household appliances, where an embedded controller uses symbolic digital codes (software) to control the functions of the appliance. Because the 'software' in these cases is stored in read-only memory (ROM), it is technically called 'firmware'.

The DNA is also firmware, because:

(1) It is digital: the digits are Adenine, Cytosine, Thymine, and Guanine, equivalent to a 2-bit code. The fact that genetic control uses 4-valued digits, and man-made controllers use 2-valued digits (bits) is a mere design detail.

(2) It is symbolic: Each codon, a sequence of three DNA digits, equivalent to a 6-bit code, is NOT an amino acid, but a symbol that represents an amino acid (or in one case, a stop signal). The fact that genetic control uses 6-bit codons, and man-made controllers use 8-bit bytes is a mere design detail.
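The 2-bit/6-bit equivalence in points (1) and (2) is easy to check in Python (a sketch of my own; the bit assignments are an arbitrary choice for illustration):

```python
# Each DNA base carries 2 bits; a codon of three bases carries 6 bits,
# giving 4**3 = 64 possible values -- matching the 64 codons.
BASE_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def codon_value(codon):
    """Pack three bases into one 6-bit number (0..63)."""
    b1, b2, b3 = (BASE_BITS[b] for b in codon)
    return (b1 << 4) | (b2 << 2) | b3

print(codon_value("ATG"))   # 14 (0b001110)
print(len(BASE_BITS) ** 3)  # 64 possible codons
```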

(3) It is stored in read-only memory. There is no information flow from polypeptides to mRNA to DNA, or any writing process.

(4) In the reading process, selected information from the DNA is copied to mRNA (temporary copies) and then interpreted: that is, translated to polypeptides (the basic form of proteins). In man-made digital controllers, selected information from the read-only memory is copied to temporary memory and then interpreted: that is, translated to signals that produce desired actions.
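The read-copy-interpret sequence of point (4) can be sketched in Python with a small excerpt of the genetic code (the full table has 64 entries; only a handful are shown here):

```python
# A small excerpt of the genetic code (the full table has 64 entries):
CODON_TABLE = {
    "AUG": "Met",  # methionine; also the start signal
    "UUU": "Phe", "GGC": "Gly", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna):
    """Copy DNA to mRNA: T is read out as U."""
    return dna.replace("T", "U")

def translate(mrna):
    """Interpret the mRNA three digits (one codon) at a time,
    stopping at a stop signal -- the 'firmware' being executed."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

mrna = transcribe("ATGTTTGGCTGA")  # DNA -> temporary mRNA copy
print(translate(mrna))             # ['Met', 'Phe', 'Gly']
```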

In addition to the temporary (mRNA) copying, in cells there are two other copying processes. There is a copying process that occurs during cell mitosis for growth and repair, and a rearrangement/copying process that occurs during cell meiosis for sexual reproduction. Neither process creates new information. Man-made digital controllers are not designed to grow and reproduce by themselves, so similar copying is not provided. Instead, there is copying in the manufacturing process.

(5) There is a higher structure typical of digital control languages. These specialized languages have data units that operate somewhat like the verbs, nouns, and modifiers of 'natural' (human) languages. Some, like a noun, specify an object or subject; some, like a verb, specify an action; and others (modifiers) specify a condition or selection or limitation, etc. The DNA information is used not only to create the basic structures (nouns) of life, but also specialized molecules (modifiers) that control the operations (verbs) of these structures.

If you want to appreciate the complexity of life designs at the cellular level, consider the process of extracting energy from food molecules like glucose.  Simply put, the process is a "controlled burn" of the food-fuel, producing energy, carbon dioxide and water.  The released energy is transported by ATP molecules (like rechargeable batteries) to the sites of all the energy-consuming activities of the cell.

This process, called Cellular Respiration, involves 4 stages:

  • Glycolysis (10 steps), 
  • Pyruvate Oxidation, 
  • the Citric Acid Cycle (also called the Krebs Cycle; 8 steps), and 
  • the Electron Transport Chain (4 complexes).  

  If you click on each of the above links, you will see what organic chemists call a "simplified" or "summary" diagram of each part of the process.  Unless you are an organic chemist or a student of organic chemistry, you will not understand these diagrams, but one glance will give you a good idea of the level of complexity of so-called 'primitive' life.  These diagrams represent only some of the cell processes, and they are only summaries!  There are diagrams for other complex processes, such as Photosynthesis, which captures the energy of sunlight and stores it by making glucose (food-fuel).

    The process of reading and interpreting the DNA information creates all the chemical 'machinery' (such as enzymes) and chemical 'factories' (such as mitochondria) for these and many other complex processes of living things.

    Wednesday, May 05, 2010

    Update on Dave McKean's 'Luna' Film

    Here is an update on Dave McKean's upcoming film Luna for which I folded two origami crabs in 2007. If you haven't read my previous blogs about this, they are Origami Emergency and More About the Origami Crabs.

    First, some additional details about the crabs.

    Luna filming began in early November of 2007. The request for the origami crabs was sent on 10-30-07 and the crabs arrived on the set 11-08-07.

    Through my friend Mark Kennedy and Nick Robinson, word reached Dennis Walker, the articles editor of the British Origami Society (BOS), who asked me for permission to put my blog article in the BOS magazine. Dennis also told me that he was "VERY jealous" and "pretty chuffed that it was through the Origami Database". I think he figured that a British filmmaker should have asked the British Origami Society first.

    After the live action shooting was completed and some editing had begun, financing for the film collapsed at the end of 2007. About two years later, new financing allowed post production of Luna to resume in March of 2010.

    Twitter Info

    I extracted the following information from Dave McKean's Twitter page:

    "Answering request for Luna stills, here's a few, from the live action shoot only. As we progress I'll post more: "

    A "90% version of Luna" has been shown to producers. "Four people have now seen my film all the way through." "... the crab performed beautifully."

    "Several small animated scenes + fx, music, sound etc." need to be done. Many 'small' details, but "a long process". Anticipate completion by the end of 2010. Listing in the Internet Movie Data Base (IMDB) by July 2010, perhaps.

    "It's nothing like MirrorMask to be honest, although it does have Stephanie Leonidas in it, and some dreamlike scenes. It's an adult drama."

    While looking for details on Luna progress, I came across this delightful bit of banter which I'll include for your enjoyment:
    Ken Fries: Steve probably already has a copy of Luna...

    Dave McKean: Great! Can I see it? Then I'll know if it's worth finishing...

    Ken Fries: Nah, u shouldn't see it, I don't want to spoil the ending for you.


    Dave McKean sent me the following still shots from the film, in addition to the above photo of the crab "that will be in the book of the film." "The stills show Grant (Ben Daniels) folding the crab, with his wife Christine (Dervla Kirwan), which he symbolically buries in the sand. They hide behind a rock and watch a real crab emerge from the same place."

    Folding the crab, wide shot and close-up:

    The crab, on hand:

    The crab burial:

    Some contributors to the Luna film:

    • Dave McKean (writer, director, designer, editor)

    • Keith Griffiths, producer (produced 78 films, directed 16 films)

    • Simon Moorhead (produced all McKean's films, including MirrorMask)

    • Antony Shearn (director of photography)

    • Tessa Beazley, production manager (production manager for about a dozen films, and other production)

    • Darkside Animation of London, animation support (graphics and special effects for 3 films)

    • Ashley Slater, music (actor, music writer, performer, and producer; music producer, mixer, programmer, and performer for "MirrorMask" soundtrack)

    • Iain Ballamy (jazz player and composer, composed the score for MirrorMask)

    • Dervla Kirwan, actress (Ballykissangel, Casanova, Dr. Who, Ondine)

    • Stephanie Leonidas, actress (MirrorMask, Yes, Feast of the Goat, Crusade in Jeans, Dracula)

    • Michael Maloney (In the Bleak Midwinter, Babel, Notes on a Scandal, Truly Madly Deeply)

    • Ben Daniels (Spooks, Doom, The State Within, Fogbound, I Want You)

    • Maurice Roëves (The Damned United, Hallam Foe, Tutti Frutti, Beautiful Creatures)

    Thursday, March 18, 2010

    The First Digitally-Controlled Designs

    Since the discovery of DNA and RNA and the Genetic Code, it is indisputably clear to biologists that the structure and function of all living things is determined by the information stored in the DNA. The interpretation of the DNA information according to the Genetic Code creates an enormous set of specific proteins and other complex organic molecules that implement the structure and function of a particular organism. (See The Genetic Code - how to read the DNA record and More About the Genetic Code.) Some of these complex molecules are building blocks of the living structure; some are the tools or 'workmen' that build the structure; other organic molecules function more like supervisors that control when and where and how this work of construction is done. Still others supervise various functions of the living structure, such as digestion, breathing, sight, growth, etc. All of these complex functions are guided not exclusively by chemical laws, but also by the information from the DNA. (See Life is more than chemistry and Can Chemical Evolution Work?) This is true of all living things, whether a single-celled organism or a much larger plant or animal such as an oak tree or an elephant. Such a complex, coordinated interplay of material and function at multiple levels is clearly DESIGN.

    I ought to explain that I understand and appreciate this from experience. I worked for 43 years as a designer and inventor of computers and other digital systems, acquiring 45 patents in that time; and in my retirement years, I have been studying organic chemistry. When I started my career, a typical computer was a roomful of refrigerator-sized cabinets, but less powerful than today's pocket calculator; and I have seen the technology grow functionally and shrink physically since then. In between then and now, the Quintrel computer that I designed, one of the first to do speech processing (like speech recognition) in real time (that is, as fast as you can talk), was the size of a cookie baking pan. Inside all GPS satellites, the computer system that controls all the signals is my design. So I know a design when I see one.

    I especially appreciate the advantages of a digitally-controlled design over a design that is just digital. The old-fashioned mechanical adding machines did digital calculation, but the control was manual; that is, the operator had to select the sequence of operations as well as the input data. I remember the company used to have one with a typewriter-like shifting carriage so that it could do multiplication and division; but it was still manually controlled.

    In human history, digitally controlled designs started with things like the 'player piano', where the keyboard was controlled by a roll of paper with punched holes to specify the sequence and timing of the notes, and the Jacquard loom, where punched holes caused threads to be raised or lowered to create intricate designs such as brocade and damask. Herman Hollerith adapted the punched cards of the weaving industry for data input for his Tabulating Machines, and Charles Babbage planned to use punched cards for his Analytical Engine, which began the age of computers. (See The Development of Information Processing.)

    Let me tell a story that illustrates how "I especially appreciate the advantages of a digitally-controlled design" as I said earlier.

    There was a period in my career when we designed digital devices for communication of digital messages. No calculation in the ordinary sense of the word was needed, but the digital logic needed to be 'smart'. For example, before sending a piece of a message (called a packet), an error-checking code needed to be generated and attached to the message, along with a packet number. When receiving a packet, the error-checking code needed to be checked to see if the packet had any errors. (Most errors were detectable.) If the packet had no errors, an 'ack' (acknowledgement) message was returned to the sender; but if errors were detected, a 'nak' (no-acknowledgement) message was returned. Both ack and nak messages included the number of the good or bad packet that had been checked. A nak message was a request to resend the packet (hoping to get it right on the next try), and an ack message told the sender that it no longer needed to keep a copy of the packet. A communication protocol like this was controlled by logic hardware similar to that used to construct a computer, but there was no computer and no software involved. The designs were digital, but not digitally-controlled as computers are controlled by software.
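A minimal sketch of such a protocol in Python (my own illustration; it uses a simple additive checksum as a stand-in for the CRC-style error-checking codes real designs used):

```python
def checksum(data: bytes) -> int:
    """Toy error-checking code (real designs used CRCs)."""
    return sum(data) % 256

def make_packet(number: int, data: bytes) -> dict:
    """Sender side: attach packet number and error-checking code."""
    return {"num": number, "data": data, "check": checksum(data)}

def receive(packet: dict) -> str:
    """Receiver side: return 'ack <n>' if the packet checks out,
    else 'nak <n>' to request a resend."""
    ok = checksum(packet["data"]) == packet["check"]
    return f"{'ack' if ok else 'nak'} {packet['num']}"

pkt = make_packet(7, b"hello")
print(receive(pkt))        # ack 7

pkt["data"] = b"hellp"     # simulate a transmission error
print(receive(pkt))        # nak 7
```

In the hardware of that era, every one of these steps was built from logic gates and registers; there was no software to change.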

    A major problem with this style of design was that if a design error needed to be corrected, or a new design feature added, new parts would need to be added, and the layout and wiring of the parts modified. The parts might not fit, so even the mechanical design might need to be redone.

    An obvious solution to this problem is to include an 'embedded' computer in the design, so that software can define the functions of the design, because software is far more easily changed than the hardware. Once the software is thoroughly tested and no longer needs to be changed, it is typically embedded in read-only memory (ROM) and is called 'firmware'. This tactic is commonplace today, with embedded computers in automobiles and in nearly every electrical household appliance. That's easy today, because electronic circuits have shrunk enough for small computers, including all memory and other supporting logic, to be placed in one small, low-cost chip. But back then, electronics had shrunk only enough for simple circuits such as a counter to fit in one chip. An embedded computer would require at least several chips.

    We couldn't buy a general-purpose computer chip (they didn't exist then), but had to design a computer made of several chips. But this gave us the freedom to design a smaller 'custom' computer with only the functions actually needed. For example, we didn't need to add, subtract, multiply or divide; the only 'arithmetic' needed was to count the bits of a packet. Mostly, the computer needed to make decisions based on a specialized set of conditions. If such a design primarily controlled not a sequence of calculations, but a sequence of other operations (such as those needed for a communications protocol), it was usually called a 'controller' rather than a computer. Often, such a simplified computer / controller could be made with only a half-dozen parts. This 'custom' controller would thus have a 'custom' set of instructions that it could execute. (Each instruction is a group of binary codes and data that tell the computer / controller what to do for each step of its actions.)
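A minimal sketch of such a 'custom' controller in Python (the instruction set here is invented for illustration, not the actual design): the only 'arithmetic' is counting bits, plus a conditional jump for making decisions.

```python
# A 'custom' instruction set: only the operations this controller needs.
def run(program):
    counter, reg, pc = 0, 0, 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "LOAD":          # load a value into the shift register
            reg = args[0]
        elif op == "COUNTBIT":    # the only 'arithmetic': count one bit
            counter += reg & 1
            reg >>= 1
        elif op == "JNZ":         # decision: jump if register not zero
            if reg != 0:
                pc = args[0]
        elif op == "HALT":
            break
    return counter

# A tiny program: count the 1-bits of one byte, the kind of job
# a packet controller does.
program = [
    ("LOAD", 0b10110110),  # 0: the byte to examine
    ("COUNTBIT",),         # 1: count low bit, shift right
    ("JNZ", 1),            # 2: loop while bits remain
    ("HALT",),             # 3
]
print(run(program))  # 5
```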

    Theoretically, a programmer (software writer) could write out the sequence of instructions (the software, or program) in the form of the ones and zeros that the hardware actually reads. But this would be very error-prone, because it is hard for people to memorize these codes, or even to copy them from a list without making mistakes. So, instead, equivalent codes that look more like English are invented, thus creating a special language that is much easier to learn and understand. Then a program called an 'assembler' is used to translate the semi-English to the ones and zeros that the hardware uses. (Also, decimal numbers are translated to binary numbers.)

    Thus, almost every computer / controller design would have a different instruction set, and a correspondingly different 'assembly language', and a different assembler program. The assembler is what connected the software design to the hardware design.

    Mostly, there were two kinds of designers: hardware logic designers that knew at least how to design parts of the computer hardware, and software designers that knew how to write software. A third kind of designer was a relative minority: the 'system designer', who understood both hardware and software -- the whole system, or the 'big picture'. (See The Start of System Engineering.) A few of these, who also knew the theory of formal languages, were able to write assembler programs, and even 'compilers', which can translate more abstract software languages. With my insatiable curiosity and willingness to educate myself in related fields on my own time, I became part of that minority.

    The engineering supervisors resisted the idea of embedding computers in a design. Their reasoning was that we had hardware designers and software designers, but nobody that knew how to make a custom assembler. We would have to give such a job to outside specialists, which would be too expensive and troublesome.

    It irked me that this judgement was hindering us from making compact and flexible designs. So, on my own time, I designed what I called the "General-Purpose Assembler". It was a step beyond a custom assembler, because before assembling a program, it first read a "language table", which defined the custom assembly language. So, the next time that a supervisor tried to veto a proposal for a design with an embedded computer / controller, I explained that I "happened to have" an assembler that could do the job. I did the extra work on my own time because I knew that digital control of a design was an optimum design paradigm.
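A minimal sketch of the table-driven idea in Python (the language table and word format here are invented for illustration; they are not the actual "General-Purpose Assembler"):

```python
# The 'language table' defines one controller's custom assembly
# language: mnemonic -> (opcode, number of operands).
language_table = {
    "LOAD":  (0b0001, 1),
    "COUNT": (0b0010, 0),
    "JNZ":   (0b0011, 1),
    "HALT":  (0b0000, 0),
}

def assemble(source, table):
    """Translate semi-English mnemonics into machine words, guided by
    whatever language table is supplied: one assembler, many languages."""
    words = []
    for line in source.strip().splitlines():
        mnemonic, *operands = line.split()
        opcode, argc = table[mnemonic]
        assert len(operands) == argc, f"wrong operand count: {line}"
        value = int(operands[0]) if argc else 0
        words.append((opcode << 8) | value)  # 4-bit opcode, 8-bit operand
    return words

source = """
LOAD 182
COUNT
JNZ 1
HALT
"""
print([f"{w:012b}" for w in assemble(source, language_table)])
```

Swapping in a different language table assembles a different controller's language with the same program, which is the whole point.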

    I wrote an instruction manual for how to construct a "language table" and how to use the "General-Purpose Assembler", and soon other departments and projects were using it. A few years later, I estimated that about two dozen language tables had been written, creating that many custom assemblers for that many different embedded controllers. The "General-Purpose Assembler" also became a component of the assembler for the Quintrel processor that I mentioned earlier. These were all digitally-controlled designs.

    Now, this story may seem like an utter digression from my initial discussion of DNA and RNA and the Genetic Code, but it was all to underscore and emphasize the following point:

    I used to think of digitally-controlled designs as a modern phenomenon -- but this is true only if you are limited to designs made by humans. But when I started to study organic chemistry and the workings of the Genetic Code, I soon realized that the greatest Engineer of all, God, got there first. For indeed, all living things of all kinds are digitally-controlled designs. The DNA is the read-only memory (ROM) that holds the genome, which is the software (firmware) that controls the chemistry that plays the role of 'hardware'. Each unit of DNA (nucleotide) is equivalent to two bits, having one of four values, and each codon (three DNA units) is equivalent to six bits, with one of 64 values. It compels one to ask "Where did all that DNA-software come from?" (See In The Beginning Was Information.) The reason why there is only one universal genetic code, and why so many life-forms share common design structures is not because all descended from a single common ancestor (unlikely if evolution is inevitable, as Richard Dawkins claims), but because all have a single Creator.

    I know that some readers will dismiss my comparison of life designs to man-made designs as mere analogy. But my argument rests on more than analogy. It involves what in category theory is called isomorphisms. Rather than getting too technical, I will illustrate the principles involved by a simple example:

    If two species have sufficient similarities (putting them in the same category), we can expect them to have similar locomotion. For example, cats and dogs both have four legs of nearly equal lengths, and the knees bend in the same directions; so we can expect them to walk and run in similar ways. Frogs, kangaroos, and apes also have four legs, but not all four of equal length, so the locomotion is different. There is greater similarity of function when there is greater similarity of structure.

    With similar logic methods, we can show that DNA-controlled life forms are more similar to embedded controllers than to personal computers. For example, in both, the completed design has no capability of loading new software (not true for PCs). In both computers and controllers, the same hardware with completely different software will have completely different functionality. In life, the same chemical laws, chemical resources (food, air, water, etc.) and same genetic code with a completely different genome will have completely different functionality.

    As an experienced designer, I not only know a design when I see one, I know a digitally-controlled design when I see one; and I appreciate that it is an optimum design paradigm. No wonder that people are using the term "Intelligent Design" to describe living things.

    For more on this subject, see The Digital Control of Life.

    Monday, March 08, 2010

    God's Unilateral Agreement

    The Bible is divided into two major parts called the Old Testament and the New Testament. "Testament" and "Covenant" are two English words that are used to translate the Hebrew "beriyth" and the Greek "diatheke" as used in the Bible. Both words are used to refer to a solemn or legally binding contract or treaty.

    In the time of Abraham, a covenant between men was often solemnized by a ceremony whereby an animal was cut in half and both parties walked between the pieces of flesh, signifying "so let it be done to me if I do not keep this covenant". But when God made a covenant with Abraham (in Genesis 15:7-21) to give to his descendants the "Promised Land" (called "The Land of Israel" until the Romans renamed it "Palestine"), the ceremony was remarkably changed. After "a deep sleep fell upon Abram" (verse 12), "there appeared a smoking oven and a burning torch that passed between those pieces" (verse 17), signifying the presence of God certifying the contract. Since sleeping Abraham (then called Abram) did not also walk between the pieces, this signified that the covenant was unilateral -- God took full responsibility for keeping His promise to Abraham and his descendants.

    However, God's covenant through Moses with His chosen people concerning the Law, repeated in the books of Exodus, Leviticus, Numbers, and Deuteronomy, was a bi-lateral covenant, because the people promised "All that the LORD has said we will do, and be obedient" (Exodus 24:7 and elsewhere), and because curses were promised if His people broke the covenant, and blessings promised if they kept it.

    Central to the Old Testament covenants was the sacrifice of animals, signifying the debt of mankind toward God for his sin, which was only symbolically paid by the animal sacrifices. The most solemn of these sacrifices occurred each year at Passover, which foretold the true sacrifice, the actual payment, to come.

    In the New Testament, we read of a day when Jesus celebrated a modified Passover ceremony with His disciples. It was modified because He ended the ceremony before the third cup, and because the ceremony was given new meaning while fulfilling the old meaning. Jesus Himself was the Passover Lamb that the cup signified; and hours later, He was sacrificed on a cross. Jesus gave the first cup (Luke 22:17) to His disciples, saying "Take this and divide it among yourselves", but didn't partake Himself, saying "I will not drink of the fruit of the vine until the kingdom of God comes." At the second cup (Luke 22:20), Jesus said "This cup is the new covenant in My blood, which is shed for you." Since then, Christians repeat an abbreviated form of that Passover ceremony that we now call Communion or The Lord's Supper.

    The sacrifice of Christ on the cross fulfilled the old covenants and introduced a new covenant because His actual and effective sacrifice ended the need for symbolic sacrifices. The apostle Paul called it the 'covenant confirmed by God in Christ' (Galatians 3:17), and 'a better covenant' (Hebrews 7:22, 8:6, 9:15, and 12:24). Paul also explains that when the prophecy of Jeremiah (31:31-34) is fulfilled, this covenant will be embraced by a rejuvenated nation of Israel.

    The new covenant is also unilateral, because Jesus Christ has paid the price in full, and we bring nothing. God says that "...all our righteousnesses are like filthy rags". (Isa 64:6) There are many Bible passages that make it clear that our righteous obedience of God's laws contributes nothing to the salvation that Christ freely offers to us. A few verses are:

    Gal 2:16
    ...a man is not justified by the works of the law but by faith in Jesus Christ, even we have believed in Christ Jesus, that we might be justified by faith in Christ and not by the works of the law; for by the works of the law no flesh shall be justified.

    Rom 4:4-5
    4 Now to him who works, the wages are not counted as grace but as debt.
    5 But to him who does not work but believes on Him who justifies the ungodly, his faith is accounted for righteousness

    Rom 11:6
    ...if by grace, then it is no longer of works; otherwise grace is no longer grace...

    Eph 2:8-9
    8 For by grace you have been saved through faith, and that not of yourselves; it is the gift of God,
    9 not of works, lest anyone should boast.

    If our righteousness is insufficient, then how can we settle our debt of sin with God, and escape condemnation? We need to "declare bankruptcy", by confessing our sin and accepting the free gift of Christ's sacrifice, His payment for our sin:

    1 John 1:9-10
    9 If we confess our sins, He is faithful and just to forgive us our sins and to cleanse us from all unrighteousness.
    10 If we say that we have not sinned, we make Him a liar, and His word is not in us.

    John 3:16-19
    16 For God so loved the world that He gave His only begotten Son, that whoever believes in Him should not perish but have everlasting life.
    17 For God did not send His Son into the world to condemn the world, but that the world through Him might be saved.
    18 He who believes in Him is not condemned; but he who does not believe is condemned already, because he has not believed in the name of the only begotten Son of God.

    (The name "Jesus" means "Savior", so believing in His name means that you trust His ability to save you.) There is no other way:

    Acts 4:12
    Nor is there salvation in any other, for there is no other name under heaven given among men by which we must be saved.

    John 14:6
    Jesus said to him, "I am the way, the truth, and the life. No one comes to the Father except through Me."

    It doesn't take a lot of faith; a genuine faith is sufficient to begin, and God will cause your faith to grow. A man once told Jesus "Lord, I believe", and then, doubting himself, added "help my unbelief". (Mark 9:24) Ephesians 2:8, quoted above, indicates that even faith is a gift of God.

    Rather than righteousness saving us, it is God's saving of us that leads to righteousness, because God's Spirit works in us to change us, and God's love motivates us to please Him:

    Titus 3:5
    not by works of righteousness which we have done, but according to His mercy He saved us, through the washing of regeneration and renewing of the Holy Spirit

    Eph 2:10
    For we are His workmanship, created in Christ Jesus for good works, which God prepared beforehand that we should walk in them.

    Phil 2:13
    for it is God who works in you both to will and to do for His good pleasure.

    There are also many verses that indicate that 'works' that result from God's work of renewal in us, demonstrate to others that we truly know God, such as:

    Titus 1:16
    They profess to know God, but in works they deny Him

    (All verses quoted from the New King James version)

    So, we start by confessing our sins, which implies a desire to stop sinning; but God, while He helps us to stop sinning, does not make our success at not sinning part of His covenant. He knows we are unable to keep such a requirement. Our righteousness, however much it was, was insufficient in the first place, and it makes no sense to add it afterward. Any righteousness we achieve afterward is by availing ourselves of His help, so how can we claim any credit for that?

    It is truly comforting to know that our right standing with God rests securely on His unilateral agreement and promise to us, motivated by His unconditional love for us.

    When in trouble, we may reach up as a child to grasp His hand; but His hand is too big for us. Instead, He reaches down and holds us -- and that is far more secure.

    For more on trusting God, click here.

    Saturday, March 06, 2010

    More About the Genetic Code

    Sometimes I will go back to one of my blog articles to correct minor errors; and a few times I have made major additions. But a disadvantage of this is that people who have already read the original article will probably not go back to read it again.

    A little more than a year ago, I wrote The Genetic Code - how to read the DNA record, and recently added some details to a paragraph and expanded the conclusion of the article. So here is the amended paragraph and the expanded conclusion.

    The original article gave the impression that only the transfer RNA (tRNA) molecules define the genetic code. Actually, other, larger, molecules are also involved. So the amended paragraph clarifies this:

    The key elements of translation are small transfer RNA (tRNA) molecules. Each kind of tRNA molecule has a region called the anticodon that can recognize and attach to a particular codon of a messenger RNA (mRNA) molecule. The tRNA molecule has another region called the "3' terminal" that attaches to a particular amino acid. This attachment is aided by molecules called aminoacyl-tRNA synthetases, of which there is generally one kind for each kind of amino acid. There are even helper molecules that provide a proofreading function to detect and correct any translation errors.

    (Actually, there are some variations of this, but discussing these would be distracting. There are also many other types of complex molecules that control the code-translation process but do not define the genetic code -- another subject.)
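    The translation scheme described above amounts to a lookup table: each codon is a key, and each amino acid (or the 'stop' signal) is a value. As a minimal sketch, here is that idea in Python, using a handful of entries from the standard codon table (the real table covers all 64 codons; the function name translate is just for illustration):

```python
# A toy model of mRNA-to-protein translation, using a small
# subset of the standard genetic code (the real table maps
# all 64 codons).  Illustrative only.
CODON_TABLE = {
    "AUG": "Met",  # Methionine; also the 'start' signal
    "UUU": "Phe",  # Phenylalanine
    "GGC": "Gly",  # Glycine
    "GCA": "Ala",  # Alanine
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Read an mRNA string three bases (one codon) at a time,
    appending amino acids until a stop codon is reached."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        message = CODON_TABLE[mrna[i:i + 3]]
        if message == "Stop":
            break
        chain.append(message)
    return chain

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

In the cell, of course, this lookup is done chemically by the tRNA and synthetase molecules described above; the point of the sketch is that the mapping itself is pure information, separate from the machinery that applies it.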

    Then I expanded the conclusion:

    Where does the genetic code come from? It is not the result of chemistry or any laws of physics. It is determined by the set of tRNA molecule types, and aminoacyl-tRNA synthetase types, which are constructed according to DNA information, which encodes not only the building materials and the building plans, but also the building tools and the building methods. In other words, the genetic code is just information that has always been there since life began.

    The number of possible genetic codes is a huge number, 85 digits long:

    1,510,109,515,792,918,244,116,781,339,315,785,081,841,294,607,960,614,956,302,330,123,544,242,628,820,336,640,000

    and all of these many codes would work equally well. But all of life uses just one genetic code, about 280 bits of information, written in the DNA whose structure scientists Watson and Crick discovered in 1953 (the code itself was deciphered in the 1960s), but it was there since creation. The theory of evolution has no explanation for how the genetic code began, because it can't explain how information can arise from no information. Nor can it explain why there is only one genetic code (out of such a huge number of equally workable codes), even though there is extreme variation in everything else. The mechanism of the present genetic code is very complex; and evolutionary theory supposes that it randomly evolved from a simpler, smaller code. But because there are so many equally viable genetic codes, random evolution should have produced species with many different codes. The evolutionary explanation is far less likely than dumping a bucketful of dice on the floor and expecting them all to land with the same number up.
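    To put a rough number on that dice comparison (my own back-of-the-envelope illustration, assuming the approximate 85-digit count given above): the chance that d thrown dice all show the same, unspecified face is (1/6) raised to the power d-1, so we can ask how many dice it would take for an all-matching throw to be as unlikely as two independent origins arriving at the very same code:

```python
import math

# Approximate number of equally workable genetic codes (the
# 85-digit figure given above), and the chance that a second,
# independent origin of life would land on the very same code.
N_CODES = 1.51e84
p_same_code = 1 / N_CODES

# The chance that d dice all land showing the same (unspecified)
# face is (1/6) ** (d - 1).  Solve for the d that gives odds
# comparable to p_same_code:
d = 1 + math.log(N_CODES) / math.log(6)
print(round(d))  # roughly 109 dice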

    The creationist explanation is that the universal genetic code is like a signature of the creator, who chose a uniform code for all of the designs of life. A short story will illustrate the principle:

    During the Cold War, Russia was suspected of stealing American technology. Proof came when some Russian war equipment given to a third country was captured and examined. It contained an integrated circuit that was identical to an American design. It is theoretically possible that the Russians had the same design concept, leading to a similar design. But digital circuits have thousands of component parts connected by thousands of wires. There are trillions of ways to position the parts on the chip and trillions of ways to route the connecting wires that work equally well. It would be impossible for the Russians to independently produce the same positions and routings even if the logical design were identical. But examination showed the details were identical, even details left over from correcting wiring errors. In effect, there was an American 'signature' in the copied design.

    For the Math fans, I'll add a footnote on how that 85-digit number was calculated:

    That big number counts the number of ways that the 64 codons can be mapped to 21 interpretations, or interpreted as 21 'messages'. One message is to start with a Methionine (or add a Methionine if already started); one is to stop, and the other 19 messages are to add one of the other 19 amino acids [to the peptide chain that will fold into a protein molecule]. This 64-to-21 mapping can be enumerated in two steps:

    First, we count the number of partitions of a set of 64 items into 21 non-empty, pair-wise disjoint subsets. In plain language, this means that:
    • Together, the 21 subsets must contain all of the 64 codons.
    • Each codon must be assigned to only one subset.
    • None of the subsets can be empty; each must contain at least one codon.
    This count is calculated by a mathematical function called the Stirling number of the second kind, which is S(64, 21) in this case.

    Second, we need to count the number of ways that the 21 subsets can be mapped to the 21 messages. This is the number of permutations of 21 things, which is 21 factorial, written 21!

    So the desired number is S(64, 21) times 21!. But typical computer hardware cannot directly compute numbers that large. Special software that partitions a big number into slices small enough for the hardware is needed. When I was designing special hardware for very large integers (for public key cryptography; I have two patents, #4,658,094 and #5,289,397, for that), I wrote such software so that I could test and verify my designs. So I used my 'BigInt' software to do the arithmetic.
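    My BigInt software is not shown here, but the arithmetic is easy to reproduce today, because Python's built-in integers are arbitrary-precision. Rather than computing S(64, 21) and 21! separately, the sketch below uses the standard inclusion-exclusion formula for counting surjections (onto mappings), whose value is exactly S(n, k) times k! (the function name surjections is just my label for it):

```python
from math import comb, factorial

def surjections(n, k):
    """Count the mappings of n distinct items onto k distinct
    targets that use every target -- equal to S(n, k) * k!,
    where S is the Stirling number of the second kind."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1))

# Sanity checks on small cases: 2**3 - 2 = 6 and 2**4 - 2 = 14.
assert surjections(3, 2) == 6
assert surjections(4, 2) == 14

# The number of possible genetic codes: 64 codons onto 21 messages.
n = surjections(64, 21)
assert n % factorial(21) == 0   # consistent with n = S(64, 21) * 21!
print(len(str(n)))      # 85 digits
print(n.bit_length())   # about 280 bits of information
```

The inclusion-exclusion form avoids computing the Stirling number by its recurrence; it starts from all k**n mappings and alternately subtracts and adds back the mappings that miss one or more targets.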

    Monday, January 25, 2010

    Dawkins' Confession

    I found this video showing Gary DeMar of The American Vision discussing Richard Dawkins' new book, The Greatest Show on Earth:

    DeMar points out some interesting quotes from Dawkins' book which I have reproduced below, and will comment on each.

    Many churches, and even parachurch organizations, each have a 'statement of faith', or 'confession of faith' whereby they define their core beliefs. It seems that in the beginning of his book, Dawkins gives his 'confession of faith', beginning with:
    "It is the plain truth that we are cousins of chimpanzees, somewhat more distant cousins of monkeys, more distant cousins still of aardvarks and manatees, yet more distant cousins of bananas and turnips..."
    Notice that he speaks of cousins, not brothers, because brothers, mothers, and fathers are not to be found. He continues:
    "Evolution is a fact, and [my book The Greatest Show on Earth] will demonstrate it. No reputable scientist disputes it..."
    Speaking of evolution as a fact doesn't sound, at first, like a statement of faith, but given his admission of lack of evidence (quoted later), it seems that what he really means by this is that he believes so fervently in evolution that it seems like a fact to him. Thinking of my own faith, I know the feeling.

    He also promises that his book will demonstrate the 'fact' of evolution, but no real demonstration of this is possible. There is experimental demonstration, where one sets up initial conditions, controls, and measurements on real physical objects, living or not. But the evolution that relates man to turnips is an interpretation of the past, and no part of it has been experimentally demonstrated in modern times. Parenthetically --
    We perhaps may need, at this point, to explain to some readers the distinction between macro-evolution, also called goo-to-you evolution, and micro-evolution, the kind that, when guided by man, breeds cats to get more kinds of cats but never dogs, and breeds dogs to get more kinds of dogs but never cats. Creationists believe in micro-evolution, and that's not debated here. The relevance here is that experimental demonstrations have been applied to micro-evolution, but not macro-evolution, which remains in the realm of story-telling.
    There is also logical demonstration, which in its most reliable form is a formal proof. But the lack of evidence, which Dawkins admits to, precludes logical demonstration of macro-evolution.

    His statement "No reputable scientist disputes it" is a tautology in disguise. There are many reputable scientists that dispute evolution, but to evolutionists like Dawkins, that defines them as not reputable.

    As though to illustrate the fervency of Dawkins' faith, the next quote sounds like an enthusiastic description of a miracle:
    "The universe could so easily have remained lifeless and simple -- just physics and chemistry, just the scattered dust of the cosmic explosion that gave birth to time and space. The fact that it did not -- the fact that life evolved literally out of nothing -- is a fact so staggering that I would be mad to attempt words to do it justice. And even that is not the end of the matter. Not only did evolution happen: it eventually led to beings capable of comprehending the process by which they comprehend it."
    His phrase "lifeless and simple" is similar to Genesis 1:2, where the earth is described as "formless and empty" before God gives it form and fills it with life; but in Dawkins' account, God gets no credit.

    His description of dust giving "birth to time and space" contradicts physics as we now know it. According to modern physics, matter cannot exist separately from time and space, and vice versa.

    Again he uses the word 'fact' to refer to his interpretation of facts. But I would agree that the idea that "life evolved literally out of nothing" is staggering -- so much so that one would be mad to believe it.

    If you still doubt that Dawkins' words are a 'confession of faith', read this quote:
    "We have no evidence about what the first step on making life was, but we do know the kind of step it must have been. It must have been whatever it took to get natural selection started."
    In other words, Dawkins knows that there must have been an event when life began, there must have been a 'first cause' that caused it to begin, and he knows that he has no evidence of how it began. He is unwilling to believe that God was that cause, so he resorts to a tautology: "It must have been whatever it took".

    Given the huge amount of information and artful design that we now observe in all living things, requiring enormous intelligence, I'd say it must have been God -- it took God to get natural selection started. And by God's account, He created various kinds of living things, so natural selection started on some collection of kinds, rather than one kind of life. And the experimental evidence is that even when we give natural selection an extra push, and the advantage of our intelligence, we can't change cats into dogs, or vice versa, let alone turning turnips into chimpanzees.

    I think my faith fits the evidence better.