Can a Machine Learn From Another Machine?

Artificial Intelligence is a new frontier for machines, and there is some hope that machines will eventually learn for themselves. We have a dancing robot and a dog that walks. Atheists can fall into the trap of thinking this proves that man is just a complicated machine. He might be, but this is not sufficient to prove it. The hope is that Artificial Intelligence can learn to teach itself; for some people, this is actually a fear. Either way, it would be a big step in arguing that machines are like humans. Because Christians doubt that humans are machines, they are uniquely suited to question this hypothesis.

For an experiment, take two chess engines, start them at a reasonable level, and watch them play each other. If neither has been "trained" by playing against a human, they will play to a draw. Repeating the process does not improve their play.
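The same effect can be seen in a game small enough to solve exactly. The sketch below (an illustration, not a chess engine) pits two identical, deterministic minimax players against each other at tic-tac-toe. Since neither side learns anything the other does not already know, every game ends the same way: a draw.

```python
# Two identical perfect players facing each other never improve on a draw.

def winner(b):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Exhaustively search the game tree; X maximizes, O minimizes."""
    w = winner(b)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    if " " not in b:
        return 0, None          # full board, no winner: a draw
    best = None
    for m in range(9):
        if b[m] == " ":
            b[m] = player
            score, _ = minimax(b, "O" if player == "X" else "X")
            b[m] = " "
            if best is None or \
               (player == "X" and score > best[0]) or \
               (player == "O" and score < best[0]):
                best = (score, m)
    return best

def play():
    """Let two identical minimax players finish one game."""
    board = [" "] * 9
    player = "X"
    while winner(board) is None and " " in board:
        _, move = minimax(board, player)
        board[move] = player
        player = "O" if player == "X" else "X"
    return winner(board)        # None means the game was drawn

print(play())  # prints None: identical perfect players always draw
```

Run it as many times as you like; the result never changes, which is the toy version of the claim above that repetition alone does not improve play.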

This is because chess engines are designed to start learning from scratch. Some chess engines are programmed never to surpass the human they are playing against, although the human can still lose through bad play. But good chess engines are programmed to learn the opponent, and eventually to apply all of the opponent's rules more consistently than the human opponent can himself. Then the human "teacher" eventually loses. By adding teachers, or by letting the engine learn different opponents, the chess engine, on the steroid of high-speed processors, can eventually beat almost any human. Only a human with new ingenuity can beat the machine.

By substituting a well-trained chess engine for the human, we can ask whether the "student" engine would ever surpass the "teaching" engine. I postulate that this would only be possible by improving the CPU power or by changing the learning algorithm of the "student" engine.

What does this mean for learning algorithms and Artificial Intelligence? I postulate that Artificial Intelligence can only reach a certain level of sophistication: it can only learn new things from a mind that can give it new problems to solve. It may even need a human to solve them.

My best suggestion for an avenue of advancement is to automate the rules from the book "How to Solve It" by G. Polya, teach the machine an axiomatic set of algorithms, and see whether it can prove a new theorem.

As a proof of concept, you could provide a set of axioms to the artificial engine (counting, addition and subtraction, AND, NOT, OR, and XOR) and see if it can learn multiplication from addition. Multiplication is not axiomatic; it can be defined as repeated addition.
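A minimal sketch of that proof of concept, in Python: the "engine" is given only the axiomatic operations (counting via a successor step, then addition built from counting), and multiplication is derived from them rather than hard-coded. The function names here are illustrative, not from any particular system.

```python
# Build arithmetic up from the counting axiom alone, never using
# Python's native + or * as a primitive beyond the successor step.

def succ(n):
    """Counting: the successor of n (the only primitive we allow)."""
    return n + 1

def add(a, b):
    """Addition defined as counting up from a, b times."""
    for _ in range(b):
        a = succ(a)
    return a

def mul(a, b):
    """Multiplication derived as repeated addition -- not axiomatic."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

print(add(2, 3))  # prints 5
print(mul(6, 7))  # prints 42, reached without native multiplication
```

The open question in the post is whether a learning machine could discover the `mul` definition on its own from the axioms, rather than having a human write it down as above.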

Machines may be able to learn a great many things and complete remarkable tasks, but one learning machine may never be able to outperform human ingenuity by learning from other machines.

This is important for addressing the fear that machines will one day take over from humanity and exterminate it out of malice. Only human malice can do that!
