# Robotic Intelligence

6 Feb 2006

Lecture in two parts:

1. What is AI?
2. AI programming

## What is AI?

Logic, planning, philosophy
symbolic vs non-symbolic
the Physical Symbol System Hypothesis

## How to make a machine think?

### Logic

predicate logic

Father(prabhas,som)  has truth value: true

Closed-world assumption: what is not known to be true is false.

Father(x,y) is first-order logic: it has variables.
Variables can be quantified: "there exists" and "for all".

for all x   Prime(x)
there exists x  Divisible(x,3)
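
On a finite domain, quantified sentences like these can be checked directly. A minimal sketch in Python (the domain and the `is_prime` helper are ours, for illustration only):

```python
# Check quantified sentences over a small finite domain.
def is_prime(n):
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

domain = [2, 3, 5, 6, 7, 9]

# "for all x, Prime(x)" -- false on this domain (6 and 9 are composite)
forall_prime = all(is_prime(x) for x in domain)

# "there exists x, Divisible(x,3)" -- true on this domain (3, 6, 9)
exists_div3 = any(x % 3 == 0 for x in domain)

print(forall_prime, exists_div3)  # False True
```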

Grandfather(x,y) if Father(x,z) and Father(z,y), for some z.

This is a logical deduction rule.

To reason about facts, one can use deduction in first-order predicate logic.  Higher-order logic allows predicates themselves to be variables; under suitable encodings, higher-order sentences can often be translated into first-order logic.
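
The Grandfather rule can be run as a deduction over ground facts. A minimal sketch in Python (the extra name `somchai` is hypothetical, added so the rule has something to conclude):

```python
# Ground facts: Father(prabhas, som) and a hypothetical Father(som, somchai).
father = {("prabhas", "som"), ("som", "somchai")}

# Grandfather(x,y) if Father(x,z) and Father(z,y), for some z.
grandfather = {(x, y)
               for (x, z1) in father
               for (z2, y) in father
               if z1 == z2}

print(grandfather)  # {('prabhas', 'somchai')}
```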

### Telling a story  (Thai folklore)

In and Na are both fishermen.  They are friends.  One day, In and Na go fishing together and catch a fish.  They argue over how to divide that fish, so they ask Yu to divide it for them.  Yu divides the fish into head, tail, and middle, and takes the best part, the middle, for himself.

Fisherman(In)
Fisherman(Na)
Friend(In, Na)
Caught_fish(In,fish123)
Caught_fish(Na,fish123)
not Divide_fish(In,fish123)
not Divide_fish(Na,fish123)
Divide_fish(Yu,fish123)
Own(Yu,middle)

We can ask questions about the story, such as: what did In and Na get?  (not the middle).  We might also induce a rule from the story, for example that friends share the same occupation, fisherman:

Friend(x,y) if Fisherman(x) and Fisherman(y).

We can ask who caught the fish?

Caught_fish(x,fish123)    x = In, Na
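
Such queries amount to pattern matching over the fact base. A minimal sketch in Python (our encoding of the story, not part of the lecture):

```python
# The story's facts as ground atoms: (predicate, arguments...).
facts = {
    ("Fisherman", "In"),
    ("Fisherman", "Na"),
    ("Friend", "In", "Na"),
    ("Caught_fish", "In", "fish123"),
    ("Caught_fish", "Na", "fish123"),
    ("Divide_fish", "Yu", "fish123"),
    ("Own", "Yu", "middle"),
}

# Query Caught_fish(x, fish123): bind x to every matching fact.
catchers = sorted(x for (pred, x, *rest) in facts
                  if pred == "Caught_fish" and rest == ["fish123"])

print(catchers)  # ['In', 'Na']
```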

A computer can perform deduction using a mechanism called "resolution", invented by the mathematician J. A. Robinson in 1965.  Robinson gave a step-by-step method to arrive at a conclusion from premises, one that can be implemented in a computer.  The idea is simple:

If you want to prove something is true, call it P.
Add "not P" to the premises and derive a contradiction, showing that "not P" cannot consistently hold together with the premises.  Therefore, P is true.

This is a bit counter-intuitive, but proof by contradiction is a very powerful proof method used widely in logic.  (Another extremely useful proof style is proof by induction.)
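
Robinson's idea can be sketched for propositional clauses. The following Python toy (our own sketch, not Robinson's full algorithm, which also handles variables via unification) represents clauses as sets of literals, adds the negated goal, and searches for the empty clause:

```python
# Tiny propositional resolution refutation.  Literals are strings;
# "~A" is the negation of "A".  To prove goal P, add ~P and search
# for the empty clause (a contradiction).
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def proves(premises, goal):
    clauses = set(premises) | {frozenset([negate(goal)])}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolvents(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # nothing new: goal not provable
            return False
        clauses |= new

# Premises: Q, and "Q implies P" written as the clause (~Q or P).
premises = [frozenset(["Q"]), frozenset(["~Q", "P"])]
print(proves(premises, "P"))   # True
```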

Besides using logic for deduction, we can also "synthesise" using logic.  This is called synthesis by proof.  Let us give an example of designing a digital device.  We begin with an obviously correct specification of what we want the device to be, say a digital watch.  We write a model of the device: a few sentences in logic describing the target.  Our watch does counting:

Count(t+1) = Count(t) + 1                                 (this happens once a second)
if Count(t) > 60 then Minute(t+1) = Minute(t) + 1 and Count(t+1) = 0

(a few more sentences to take care of details)
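
One way to see that the specification is sensible is to run it. A minimal Python simulation (our reading of the sentences above; the count is taken to roll over once it exceeds 60):

```python
# Simulate the watch spec: Count advances once a second; when it
# exceeds 60, Minute advances and Count resets.
def tick(count, minute):
    count += 1                 # Count(t+1) = Count(t) + 1
    if count > 60:             # roll seconds over into minutes
        minute += 1
        count = 0
    return count, minute

count, minute = 0, 0
for _ in range(61):            # simulate 61 seconds
    count, minute = tick(count, minute)

print(count, minute)  # 0 1
```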

Then, from these sentences, we "prove" by transforming them with "sound logical rules" into other sentences.  In a finite number of steps we "rewrite" the original specification into "grounded sentences".  A grounded sentence is composed only of predicates known to be implementable in hardware, such as {and, or, not} gates.  They become counters, decoders, multiplexers, etc.

The "correctness" of the final "grounded sentences" is secured by the fact that each transformation step preserves correctness (the rules being sound).  The advantage of this method is that the final device need not be tested for correctness of design.  In terms of fabrication (microelectronics) it is not proved correct, but it is still an enormous cost saving not to have to test all the functionality of the device; only electrical characteristics need testing.  The impracticality of this method is that "rewriting" the specification into "grounded sentences" is not always automatic.  It is combinatoric (the choices explode, so a good one is hard to find), and therefore human assistance is required.  The current state of the art makes small-scale industrial applications possible, such as proving a communication protocol or even a simple microprocessor correct.  Even a simple microprocessor starts from thousands of axioms.

## Planning

Planning is achieving a goal by searching for a sequence of actions that satisfies the goal (makes the goal true).  An example from a blocks world:

goal   On(c,b) and On(b,a) and On(a,table)

building a tower of blocks: c (on top), b, and a (on the table).
The primitives to manipulate this world can be:
put(x,y)   reads: put object x on object y  (y can be a block or the table; x must be a block)
pick(x)    reads: the robot hand picks up block x

if initially the world is :

On(a,table)  and On(b,table)  and On(c,b)

then a plan to build the tower can be:

pick(c)   put(c,table)     ; clear top of b
pick(b)  put(b,a)
pick(c)  put(c,b)
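
Such a plan can be found by blind search over states. A minimal breadth-first planner in Python (our sketch; pick/put are collapsed into a single move step, applicable only when the tops involved are clear):

```python
# Breadth-first planner for the three-block world.
# A state maps each block to what it rests on ("table" or another block).
from collections import deque

BLOCKS = ("a", "b", "c")

def clear(state, x):
    """A block is clear if nothing rests on it."""
    return all(on != x for on in state.values())

def moves(state):
    """Yield every legal (action, next_state) pair."""
    for x in BLOCKS:
        if not clear(state, x):
            continue
        for y in BLOCKS + ("table",):
            if y != x and y != state[x] and (y == "table" or clear(state, y)):
                new = dict(state)
                new[x] = y
                yield (f"pick({x}) put({x},{y})", new)

def plan(start, goal):
    frontier = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for action, new in moves(state):
            key = tuple(sorted(new.items()))
            if key not in seen:
                seen.add(key)
                frontier.append((new, steps + [action]))
    return None

start = {"a": "table", "b": "table", "c": "b"}
goal = {"a": "table", "b": "a", "c": "b"}
print(plan(start, goal))   # a three-step plan, as in the text
```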

Planning is searching for such a sequence of actions to achieve the goal state.  It is quite easy for a "planner" to get stuck in a state where no primitive can be applied to move to another state, or to loop without making any progress toward the goal; guiding the search well depends on heuristics.  (The "instant insanity" puzzle is a famous example of how naive search can explode without progress.)
The example above can illustrate the case.  Suppose we use a heuristic that scores a state by counting the "correct" relations of blocks in the goal,

goal   On(c,b) and On(b,a) and On(a,table)

Each of these subgoals gives one point when achieved.
If the planner always chooses the action that maximises points, then "clear top of b" will never be chosen, because it reduces the score!
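
The trap is easy to see by computing the points. A small Python sketch (states encoded as sets of On(...) relations; the encoding is ours):

```python
# Score a state by how many goal relations already hold.
goal = {("c", "b"), ("b", "a"), ("a", "table")}

def score(state):
    return len(state & goal)

initial   = {("a", "table"), ("b", "table"), ("c", "b")}      # c sits on b
cleared_b = {("a", "table"), ("b", "table"), ("c", "table")}  # after clearing b

print(score(initial), score(cleared_b))  # 2 1 -- the necessary move loses a point
```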

To apply logic to the wide real world we need a "logic of common sense".  John McCarthy is the main figure attacking this problem.  You can read much of his masterwork on his homepage.

I think logic alone will not take us very far on the road towards intelligent machines.  To enable machines to do more, we will need to give them "motivation".  Emotion is one of the most interesting topics in AI.  How do we give machines emotion?  How do we understand emotion in humans?  Can we reason about emotion?  Marvin Minsky, who is called (together with McCarthy) a "father of AI", is very interested in such topics.  Much of his work concentrates on understanding emotion, human jokes, how children think when playing with blocks, how our memory works, etc.  He wrote a famous book, "The Society of Mind", explaining his theory, and is in the process of writing a new book on "emotional machines".  You can read all his work on his homepage.

Progress in logic is towards new kinds of logic: "non-monotonic logic" and "causal logic".  Shoham invented a "logic of cause and reason" (read his homepage).  These new logics can be applied to many "abstract" ideas such as "time" and "belief".

Another idea worth considering is that Eastern thinking differs from Western thinking.  For example, Buddhist teaching about the mind does not have the "law of excluded middle" of Western logic.  In Western logic, a truth value is either true or false; it cannot be both.  In Eastern thought, something can be neither good nor bad.

## Philosophy

The grandest question in AI is still "Can a machine think?"  This is not a scientific question; it is a philosophical one.  Many people believe that if we can define "thinking" then we can answer the question.  I believe that for something like "thinking", it is unlikely we will ever have a definition everyone agrees upon, and hence answering the question remains an open problem.

Alan Turing devised a test (now called the Turing test).  It is a thought experiment.  There are two rooms: in one room a human, in the other a human or a computer.  The person in the first room communicates with the occupant of the other room by typing (to make it a modern version, via "chat").  If, after a while of chatting, the human in the first room cannot tell whether he or she is conversing with a human or a machine, and it is in fact the computer doing all the correspondence, then we say the computer is as "intelligent" as a human (because a human cannot tell it apart from a fellow human).

Let us dispense with many minor details (such as whether there is a limit on the topic of conversation); the thought experiment caught people's imagination, and people debate it enthusiastically.  In some contests around 1990, computer programs fooled human judges into thinking they were chatting with a human, at least on restricted topics.  Some people even proposed a more rigorous test, called the Total Turing Test (TTT), that is much harder for a machine to pass.

Of course, there are also strong opponents of AI, especially of the strong version of AI.  I need to explain weak AI and strong AI first.  Weak AI denotes the belief that AI can "mimic" human intelligence (programs can fool people), but that on close inspection the difference can be detected; this makes AI useful without claiming that machines can really think.  Strong AI holds that a machine can really think: machines can do the same thing that humans call "thinking".

The most famous opponent of AI is John Searle, a philosopher.  He holds the position that machines cannot think, and he published his most debated work proposing a scenario called the "Chinese room" to argue it.  Let us hear his argument:

### Chinese Room

There is a room that accepts as input questions written in Chinese on a piece of paper.  The room processes each question and answers by writing the answer, in Chinese, on a piece of paper.  From what people see outside, this room can "think": it gives the correct answer every time.

Inside the room is an English gentleman who knows nothing of the Chinese language.  What he does is look up instructions in the room, which is full of English rule books on how to answer the questions.  He follows those instructions rigorously; the rules tell him how to answer, which Chinese characters to write, and so on.  So, without knowing the meaning of the text, he can answer the questions.

There is no "intelligence", no "understanding", in the Chinese room.  Therefore, Searle argues, a machine cannot think.

This is a superb piece of writing.  It took on every AI expert and, I have to say, 26 years have passed and thousands of papers have debated it, yet no conclusion has been found.  (McCarthy has a reply in his collection of writings on the philosophical debate of AI.)

I think the value of this philosophical "gymnastics" is not in finding the truth (whatever it is, the truth about "thinking"?).  It is about "searching" for the meaning.  As long as people are intrigued by "machines that think", there will be people who want to think deeply about it.  More people will do AI.  Young people will do AI.  Progress will be made in due time.

## References

<will update references soon>

A basic AI textbook
Classic works of McCarthy and Minsky
Alan Turing: the Turing machine and the Turing test
Searle's original paper, "Minds, Brains, and Programs", in Behavioral and Brain Sciences
Shoham

End of the first part of the lecture.

Part 2: AI programming

Prabhas Chongstitvatana