Sunday, September 05, 2010

Can Artificial Intelligence be Evolved?

Now and then we see a claim that an evolutionary program has advanced the pursuit of artificial intelligence (AI). Because the degree of 'intelligence' is invariably minuscule compared to advances in AI using non-evolutionary methods, the report will typically use phrases such as "...evolved to produce basic intelligence" and "it is hoped that the discovery may in future...". That is, even though the 'intelligence' could be demonstrated by a pre-schooler, there is great hope of super-human intelligence down the road.

Usually the reports lack the detail required for critical review. This is partly justifiable, because there is so much detail involved that it is not practical to publish everything. But usually there is not even complete disclosure at a functional level.

For example, suppose it is claimed that the 'artificial life-form' developed the use of memory. It could be that the simulated system was simply given the opportunity to do a task with or without memory, and it found that using memory led to greater success. Well, you don't need an evolutionary algorithm to do that. But without disclosing a functional description, the reader can be left with the impression that a memory mechanism was 'evolved'.
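
To make the point concrete, here is a minimal sketch (purely hypothetical, not drawn from any published experiment) of how such a result can come about: each simulated agent carries a single gene that merely switches on a memory mechanism the programmer already built, and selection predictably favors the switch being on.

```python
import random

def fitness(uses_memory: bool) -> float:
    # The task is rigged by design: agents that use the built-in memory score higher.
    return 1.0 if uses_memory else 0.3

def evolve(generations: int = 20, pop_size: int = 50) -> float:
    # Each agent is just one boolean gene: 'use the provided memory mechanism?'
    population = [random.choice([True, False]) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, then refill by copying survivors with rare mutation.
        survivors = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        children = [s if random.random() > 0.05 else (not s) for s in survivors]
        population = survivors + children
    return sum(population) / len(population)  # fraction of agents 'using memory'

print(f"Fraction 'using memory' after selection: {evolve():.2f}")
```

The memory mechanism itself was never evolved; only the decision to use the one already provided was selected for.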

For many readers, AI is a great mystery, and they have no way to judge whether such reports are overly optimistic. So here I will try to remove much of the mystery by providing an overview without getting too deep into the math and logic.

An Overview of AI

AI is a broad field of study, and not all researchers or developers share the same goals or use the same methods.

The process of intelligent thinking is generally described as having two parts: analysis (taking apart) and synthesis (putting together). Closely related to these, the terms induction (logically proceeding from the specific to the generic) and deduction (from generic to specific) are also used. Some AI efforts focus on analysis, some on synthesis, and some on both.

For example, one project focused on using a highly abstract formal language to build a database of 'expert knowledge' garnered from doctors (who did most of the analysis), and then synthesizing from it an 'artificial expert' that could diagnose diseases, thus simulating a team of medical experts.

Fields of practical science generally have two sub-fields of endeavor: research and development. Research seeks to discover new principles and methods, and development seeks to find effective ways to use the new principles and methods to accomplish practical purposes. Some AI efforts focus on research, some on development, and some on both.

There is a wide variety of methods used to try to create artificial intelligence. Usually an AI project focuses on one method, but sometimes methods are combined. Some methods are attempts to mimic natural patterns or structures.

The 'evolutionary' (selective adaptation) algorithms are in this category. Some model biological selective adaptation closely; others do so more loosely, using the 'evolution' concept more as inspiration. Which approach is taken seems to depend on the motive. The motive may be theoretical -- to prove evolution -- in which case the researchers tend to talk of "intelligent agents". Or the motive may be practical -- to provide better computing -- in which case they tend to talk of "intelligent machines" instead.

Another AI method that mimics nature is the neural network. I remember that the early research in this area focused closely on modelling the operation of actual neurons, trying to understand how they worked. Some researchers used software models, and others built circuits that mimicked neurons. But these faithful models were very complex, so researchers adopted much simpler neuron models in order to build larger networks.
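
For readers curious what that simplified model looks like, here is a minimal sketch of the kind of 'neuron' most artificial neural networks use: a weighted sum of inputs pushed through a squashing function. The weights below are arbitrary illustrative values, not a trained network.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, squashed by a logistic activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Arbitrary illustrative values -- a real network would adjust these weights by training.
print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.8], bias=-0.1))
```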

Other AI methods seek to borrow and adapt the mental methods that people use to reason and solve problems. These AI systems are primarily rule-based -- instead of just handling data that represent facts, they use lists of rules, including rules for choosing rules, rules for making new rules from other rules, and so on. They use category hierarchies, means-ends analysis, and planning strategies to try to construct a logical network connecting known facts to a target question. These rule-based systems depend heavily on very abstract formal languages to describe relationships, categories, and attributes of objects.
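
As a rough illustration of the rule-based style (the facts and rules here are invented for this sketch, not taken from any real expert system), a 'forward chaining' engine repeatedly applies any rule whose conditions are already known facts, until nothing new can be concluded:

```python
# Invented facts and rules, purely for illustration.
facts = {"has_fever", "has_rash"}
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_specialist"),
]

# Forward chaining: fire any rule whose conditions are all known, until nothing changes.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # the derived conclusions have joined the original facts
```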

As an engineer and programmer who has seen up close the development of computing from the days when transistors were first used, I see the rule-based AI methods as a natural extension of the development of computing.

For example, suppose the solution of a problem requires us to determine the length H of the hypotenuse (longest side) of a right triangle when we know the lengths A and B of the shorter sides. To find the answer for a particular case, all we need to do is arithmetic. (I'm including finding square roots as arithmetic.) A machine that can do arithmetic for us is called a calculator.
But to express how to solve all such problems, we use an algebraic expression to say that H is the square root of the sum of A squared and B squared. We have gone to a higher level of abstraction -- from describing the solution of one problem to describing the solution of a class of similar problems. A machine that can do arithmetic for us, guided by an algebraic expression (a formula), is called a programmable calculator.
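
In programming terms, that 'programmable calculator' level of abstraction looks something like this trivial sketch: one formula covers every right triangle, and the machine only has to do the arithmetic.

```python
import math

def hypotenuse(a: float, b: float) -> float:
    # H is the square root of the sum of A squared and B squared.
    return math.sqrt(a**2 + b**2)

print(hypotenuse(3, 4))  # 5.0
```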

Now suppose that we need to know how to compute length B when we know length H and length A. This requires a different algebraic expression, which can be derived from the one described earlier. A programmer who knows algebra can manipulate the first expression to derive the second, then write another program to solve this new kind of problem. But suppose we require that the computer do this algebraic manipulation itself? That is a different matter. Instead of merely writing software that can interpret an algebraic expression to carry out the correct arithmetic procedure, the programmer must write software that "knows how to do algebra" -- that is, software that can manipulate algebraic expressions. Now we have stepped up to an even higher level of abstraction.
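
To illustrate that higher level, here is a sketch using the SymPy library (a symbolic-algebra library chosen just for this example, not one of the products mentioned below): given only the Pythagorean relation, the program itself rearranges the expression to solve for B.

```python
from sympy import Eq, solve, symbols

# Software that 'knows how to do algebra': state the Pythagorean relation once,
# and let the library itself rearrange it to solve for B.
A, B, H = symbols("A B H", positive=True)
pythagoras = Eq(H**2, A**2 + B**2)

print(solve(pythagoras, B))  # [sqrt(H**2 - A**2)], give or take SymPy's formatting
```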

Years ago, I bought a program called MathCad (from Mathsoft) that "knows how to do algebra" -- and calculus, statistics, matrix algebra, graphs, and many other mathematical techniques. It is so good at this that the program taught me math that I hadn't learned in college. There is a similar program named Mathematica produced by Wolfram Research, which is more powerful (and expensive).

Now Wolfram Research is developing an even 'smarter' program, and has made the current version, "Wolfram Alpha", available on the Internet. It has access to a wide variety of scientific data. For example, you can enter "amino acids" and it will list the 20 kinds. Enter "weights of amino acids", and it will assume you meant atomic weights, and tell you the highest, lowest and median values. Better yet, it knows how to interpret these facts. Enter "distance from Venus to Mars", and it will consult its data about the planetary system and report that right NOW the distance is 148.7 million miles (and in other units), and that it takes 13 minutes for light to travel that distance in empty space. It's an even higher level of abstraction, reached without evolution -- just more abstract rules.

Conclusion

In summary, the rule-based style of AI has been far more successful in a practical way (accomplishing smarter computing than ever before) than the neural networks and evolutionary algorithms. The neural and evolutionary strategies are pursued not for near-term practical benefit, but on theoretical grounds.

The neural networks are pursued to try to demonstrate that brain-like structures can produce artificial 'thought', in contrast to philosophers who see the brain as not the producer of thought, but more like the soul's keyboard. After decades of research, progress has been painstakingly slow, and results very limited.

The evolutionary strategies are pursued to try to demonstrate that evolution can produce design. But so far, the results only demonstrate what selective adaptation does in the biological world -- namely, to adjust and adapt a design within the confines of the resources already provided within the design.

To designers such as myself, the reason the rule-based systems are far more successful is obvious: they are compatible with the top-down principles of design, which work from well-defined purposes toward increasingly detailed design. The other methods attempt to achieve design bottom-up, starting with the details and working toward a goal that is not defined, with no strategy for how to get there. It's implicitly based on the myth that randomness magically produces information, or on the concept of a 'learning machine'. When a design IS 'found', AND the researchers allow you to look at their software, it becomes evident that the result was actually designed into the software. When you hide Easter eggs and search randomly, you might actually find Easter eggs.

But 'learning machines' are inherently complex, and must themselves be designed. Also, a 'learning machine' is just an optimizer that finds the 'best' within some domain that is limited by the design, as the sketch below illustrates.
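
Here is a minimal sketch of what I mean (an invented one-dimensional example): a simple hill-climbing 'learner' whose search space, step size, and scoring function are all fixed by the designer in advance. It can only ever find the best point within the space it was handed.

```python
import random

def score(x: float) -> float:
    # The 'best' answer is baked into this designer-chosen scoring function.
    return -(x - 3.2) ** 2

def hill_climb(lo: float = 0.0, hi: float = 10.0, steps: int = 1000) -> float:
    # The search domain [lo, hi] and the step size are also fixed by the designer.
    x = random.uniform(lo, hi)
    for _ in range(steps):
        candidate = min(hi, max(lo, x + random.uniform(-0.1, 0.1)))
        if score(candidate) > score(x):
            x = candidate
    return x

print(f"best found: {hill_climb():.2f}")  # converges near the built-in optimum, 3.2
```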

What is "practically zero"? I will define it as 1 divided by a very large number. So what is "a very large number"? In the physical world, it is hard to get numbers larger than about 100 digits. For example, if you estimate the ratio of the mass of the observable universe to the mass of the electron, you get only an 84-digit number. But when you compute the probability of getting some irreducibly complex design by a random method, and express it as 1 divided by X, then X is typically thousands of digits long.

That generally means that the universe doesn't have enough material and enough time for the random experiment to succeed. That's practically zero.
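
The 84-digit figure above is easy to check with rough numbers (the masses below are ballpark assumptions for illustration, not precise values):

```python
# Ballpark figures, assumed for illustration only.
universe_mass_kg = 1.5e53      # rough mass of ordinary matter in the observable universe
electron_mass_kg = 9.109e-31   # mass of one electron

ratio = universe_mass_kg / electron_mass_kg
print(f"ratio ~ {ratio:.2e}")           # about 1.6e83
print(len(str(int(ratio))), "digits")   # an 84-digit number
```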
