AI as search

State-Space

 
  The "space" in state-space represents an aspect of the problem that one is interested in (an abstraction).  A state represents an instant of the world related to the problem, for example a snapshot at the moment some event occurs.  A state is changed into another state by "operators".

Searching the state-space

  From an initial state, applying "operators" changes it into other states. The graph of these connected states is the state-space. At least one of the nodes in this graph represents the "goal" state, i.e. the solution to the problem. A problem is "solved" by searching for a path from the initial state to the goal state. This path represents how to apply a sequence of operators so that, starting from the initial state, the goal state is reached.

  This is how "searching" (in a state-space) is used to "solve" a problem. In other words, searching a state-space is a main method for building an "intelligent program".

Example of state-space

A game of 15-puzzle is played by moving tiles on the board so that, from a given initial configuration, one reaches the final configuration (in which all tiles are ordered 1..15).

A state can be a configuration of the board. An operator is a movement of tiles.  Because every move amounts to shifting the "empty" tile around, the movements of this tile can be used to represent the "operators", i.e. move up, down, left, right.  These operators change one configuration of the board into another.
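
To make this concrete, here is a minimal Python sketch of one possible encoding (the names GOAL and expand are illustrative, not part of the note): a configuration is a tuple of 16 numbers read row by row, with 0 standing for the empty tile, and expand applies every legal operator to a configuration.

    # Goal configuration: tiles 1..15 in order, 0 marks the empty tile.
    GOAL = tuple(list(range(1, 16)) + [0])

    def expand(state):
        """Apply every legal operator (moving the empty tile up, down,
        left or right) and return the resulting configurations."""
        children = []
        blank = state.index(0)
        row, col = divmod(blank, 4)
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:   # up, down, left, right
            r, c = row + dr, col + dc
            if 0 <= r < 4 and 0 <= c < 4:
                child = list(state)
                child[blank], child[r * 4 + c] = child[r * 4 + c], child[blank]
                children.append(tuple(child))
        return children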

The graph (or tree) whose nodes are these configurations and whose links represent the movements (up, down, left, right) is the state-space.

Searching this state-space, from an initial state, for a goal state (the configuration in which the tiles are ordered 1..15) will return a path. This can be regarded as "solving" the puzzle.

There are many ways to search the state-space for the goal state. Simple methods do not require much knowledge about the problem. Advanced methods use "heuristics" to determine the order of the search.

Blind search


In blind search, no specific knowledge about the problem is required. For example, Depth-First Search is a blind search: it expands a parent node and follows the first child.  Breadth-First Search expands a parent node and visits all of its children before going down to the deeper level (the grandchildren).  The main loop is as follows:

Let buf be a buffer storing the nodes under investigation, and let the history list be a list of already visited nodes.

search
  put the start node into buf
  while buf is not empty
(1)    current = pick the first item in buf
       children = expand current
       if children contain a goal
         then stop with success
(2)    update buf with children
       add current to history

Two important choices are made here: (1) which item to pick from buf, and (2) how to update buf with the children.  These choices determine the order in which nodes are visited.  If all children are put at the "front" of buf, the search order becomes Depth-First.  If all children are put at the "back" of buf, the order is Breadth-First.
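
A runnable Python sketch of this loop (blind_search and its parameter names are illustrative) makes the two choices explicit. It takes the expand function as a parameter, remembers how each node was first reached so that it can return the path, and skips nodes seen before, which anticipates the assumption used in the proof further below.

    from collections import deque

    def blind_search(start, expand, is_goal, depth_first=True):
        """Generic blind search; returns a path from start to a goal, or None."""
        buf = deque([start])      # nodes under investigation
        history = set()           # already visited nodes
        parent = {start: None}    # remembers how each node was first reached
        while buf:
            current = buf.popleft()               # (1) pick the first item in buf
            children = expand(current)
            for child in children:
                if child not in parent:
                    parent[child] = current
                if is_goal(child):                # stop with success
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return list(reversed(path))
            history.add(current)
            # (2) update buf with children, never re-adding nodes seen before
            new = [c for c in children if c not in history and c not in buf]
            if depth_first:
                buf.extendleft(new)               # children at the front -> Depth-First
            else:
                buf.extend(new)                   # children at the back  -> Breadth-First
        return None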

Which order is more efficient depends on the "shape" of the state-space; by "efficiency" one means the number of nodes visited.  However, Breadth-First Search has one important property: it will find the "shortest" path (the fewest operator applications) from the start node to the goal node.
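
As a small illustration on a made-up graph, reusing the blind_search sketch above, the Breadth-First order returns the route with the fewest moves even though a longer route to the goal also exists:

    # A hand-made graph: expanding a node simply looks up its children.
    graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'],
             'D': ['E'], 'E': ['G'], 'G': []}

    path = blind_search('A', lambda n: graph[n], lambda n: n == 'G',
                        depth_first=False)        # Breadth-First order
    print(path)   # ['A', 'C', 'G'], the shortest route rather than A-B-D-E-G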

Heuristic search

To improve the efficiency, one can incorporate knowledge about the problem, a kind of "hint", to order the search.

This knowledge is used to determine which child is more promising and should be visited first. This "bias" can be computed by a heuristic function.  A heuristic function is a function that is applied to a state and returns a "goodness" value of that state.

An example of a heuristic function for the 15-puzzle is the sum of distances (in rows and columns) between the tiles of a configuration and their places in the goal configuration, that is, how far a configuration is from the goal.  By counting how far each tile is from its final position, the total sum of distances is a good guide to which configuration should be explored first.
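
A Python sketch of this heuristic, assuming the same tuple encoding as the earlier 15-puzzle example (0 marks the empty tile and is not counted):

    def manhattan(state, goal=GOAL):
        """Sum over all tiles of how many rows and columns each tile
        is away from its place in the goal configuration."""
        total = 0
        for pos, tile in enumerate(state):
            if tile == 0:                         # the empty tile is not counted
                continue
            row, col = divmod(pos, 4)
            goal_row, goal_col = divmod(goal.index(tile), 4)
            total += abs(row - goal_row) + abs(col - goal_col)
        return total

    # One way to use it in the search loop: keep buf ordered by this value,
    # e.g. buf = deque(sorted(buf, key=manhattan)), so the most promising
    # configuration is picked first.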

Proof that "search" will find a goal

Assume that "update buf" never adds a child that is identical to any node that has been seen before (either in the history list or already in buf). In other words, "search" will not fall into an infinite loop.

With the above assumption, each node appears in buf at most once, and when it is "expanded" it is moved to the history list. Therefore every node is explored at most once, eventually all reachable nodes are explored, and "search" terminates. In particular, "search" will find a goal state if one exists.

A "bad" heuristic cannot prevent the search from finding a goal.  When a heuristic does not "improve" the search order, the search will still succeed because, no matter what the order is, all nodes will eventually be explored.  At worst, an unsuccessful heuristic is only as good as a blind search.
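
As an extreme, made-up example, a heuristic that returns the same value for every state gives no useful bias at all: sorting buf by it leaves the order unchanged, so the search behaves exactly like the blind search above, yet it still terminates and still finds a goal.

    def useless(state):
        return 0          # every state looks equally "good"

    # Sorting is stable in Python, so sorted(buf, key=useless) returns buf
    # in its original order; the heuristic adds no guidance, but the search
    # remains complete.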

A* search

(coming soon)


last update 1 Nov 2012