Search (Subsystem of AIMA Code)

The search subsystem contains code from Part II on problem solving, search, and game playing. The main data type is the problem. Each new type of problem needs a representation for states, a successor function, and a goal test. You can find examples of this in the domains subdirectory.


search/:
search/algorithms/:
search/environments/:
search/domains/:
search/agents/:

File search/test-search.lisp

Test Cases for Search


File search/algorithms/problems.lisp

Defining Problems

problem type (initial-state goal num-expanded iterative?)

A problem is defined by its initial state and by the type of problem it is. We will define subtypes of PROBLEM later on. For bookkeeping, we count the number of nodes expanded. Note that the three other fields from the book's definition [p 60] have become generic functions; see below.
When we define a new subtype of problem, we need to define a SUCCESSORS method. We may also need to define methods for GOAL-TEST, H-COST, and EDGE-COST, but the default methods may be appropriate.
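
For example, here is a minimal sketch of what such a subtype might look like. COUNT-PROBLEM and its slots are invented for illustration, and plain DEFSTRUCT with :INCLUDE is shown where your copy of the code may use the AIMA DEFSTRUCTURE macro; adapt accordingly.

    ;; A hypothetical problem: count from 0 up to the goal number by adding
    ;; 1 or 2.  COUNT-PROBLEM is illustrative only, not part of the distribution.
    (defstruct (count-problem (:include problem (initial-state 0) (goal 10))))

    (defmethod successors ((problem count-problem) state)
      ;; An alist of (action . state) pairs reachable from STATE.
      (list (cons '+1 (+ state 1))
            (cons '+2 (+ state 2))))

    ;; GOAL-TEST, H-COST, and EDGE-COST are inherited: by default the goal
    ;; test compares STATE to the GOAL slot with EQUAL, H-COST is 0, and
    ;; every step costs 1.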

successors method ((problem problem) state)

Return an alist of (action . state) pairs reachable from this state.

goal-test method ((problem problem) state)

Return true or false: is this state a goal state? This default method checks if the state is equal to the state stored in the problem-goal slot. You will need to define your own method if there are multiple goals, or if you need to compare them with something other than EQUAL.

h-cost method ((problem problem) state)

The estimated cost from state to a goal for this problem. If you don't overestimate, then A* will always find optimal solutions. The default estimate is always 0, which certainly doesn't overestimate.

edge-cost method ((problem problem) node action state)

The cost of going from one node to the next state by taking action. This default method counts 1 for every action. Provide a method for this if your subtype of problem has a different idea of the cost of a step.

Manipulating Nodes

node type (state parent action successors unexpanded depth g-cost h-cost f-cost expanded? completed?)

Node for generic search. A node contains a state, a domain-specific representation of a point in the search space. A node also contains bookkeeping information such as the cost so far (g-cost) and estimated cost to go (h-cost). [p 72]

expand function (node problem)

Generate a list of all the nodes that can be reached from a node.

create-start-node function (problem)

Make the starting node, corresponding to the problem's initial state.

Manipulating Solutions

Solutions are represented just by the node at the end of the path. The function SOLUTION-ACTIONS returns a list of actions that get there. It would be problematic to represent solutions directly by this list of actions, because then we couldn't tell a solution with no actions from a failure to find a solution.

solution-actions function (node &optional actions-so-far)

Return a list of actions that will lead to the node's state.

solution-nodes function (node &optional nodes-so-far)

Return a list of the nodes along the path to the solution.

solve function (problem &optional algorithm)

Print a list of actions that will solve the problem (if possible). Return the node that solves the problem, or nil.
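
A hedged usage sketch, reusing the illustrative COUNT-PROBLEM defined above and assuming the optional algorithm argument can be the name of one of the search functions documented below:

    ;; Solve the toy problem with breadth-first search, then recover the
    ;; action sequence from the returned node.  SOLVE returns NIL on failure.
    (let ((node (solve (make-count-problem) 'breadth-first-search)))
      (when node
        (solution-actions node)))   ; e.g. (+2 +2 +2 +2 +2)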

print-solution function (problem node)

Print a table of the actions and states leading up to a solution.

Comparing Algorithms

compare-search-algorithms function (problem-fn algorithms &key n)

Run each algorithm on N problems (as generated by problem-fn) and compare the results for nodes expanded and for path cost.

Printing

print-structure method ((node node) stream)

display-expand method ((problem problem) node)

Possibly print information when a node is expanded.

File search/algorithms/simple.lisp

Simple Search Algorithms

Here we define the GENERAL-SEARCH function, and then a set of search functions that follow specific search strategies. None of these algorithms worries about repeated states in the search.

general-search function (problem queuing-fn)

Expand nodes according to the specification of PROBLEM until we find a solution or run out of nodes to expand. The QUEUING-FN decides which nodes to look at first. [p 73]
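
To make the connection concrete, here is a sketch of how a specific strategy reduces to GENERAL-SEARCH plus a queuing function. ENQUEUE-AT-END and ENQUEUE-AT-FRONT are the queue operations from the AIMA utilities; the MY- names are invented for the example.

    (defun my-breadth-first-search (problem)
      ;; FIFO queue: the shallowest unexpanded nodes are examined first.
      (general-search problem #'enqueue-at-end))

    (defun my-depth-first-search (problem)
      ;; LIFO queue: the most recently generated nodes are examined first.
      (general-search problem #'enqueue-at-front))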

breadth-first-search function (problem)

Search the shallowest nodes in the search tree first. [p 74]

depth-first-search function (problem)

Search the deepest nodes in the search tree first. [p 78]

iterative-deepening-search function (problem)

Do a series of depth-limited searches, increasing depth each time. [p 79]

depth-limited-search function (problem &optional limit node)

Search depth-first, but only up to LIMIT branches deep in the tree.

Search Algorithms That Use Heuristic Information

best-first-search function (problem eval-fn)

Search the nodes with the best evaluation first. [p 93]

greedy-search function (problem)

Best-first search using H (heuristic distance to goal). [p 93]

tree-a*-search function (problem)

Best-first search using the estimated total cost F = G + H. [p 97]

uniform-cost-search function (problem)

Best-first search using the node's depth as its cost (with unit step costs this equals the path cost). Discussion on [p 75]

Utility Function

make-initial-queue function (problem queuing-fn)


File search/algorithms/repeated.lisp

Search Algorithms That Avoid Repeated States

In this file we show algorithms that worry about repeated states. Here are the three ways to deal with repeated states, from [p 82]:

eliminate-returns function (nodes)

Get rid of nodes that return to the state they just came from, i.e., where the last two actions just undo each other.

eliminate-cycles function (nodes)

Get rid of nodes that end in a state that has appeared before in the path.

eliminate-all-duplicates function (nodes node-table)

Get rid of all nodes that have been seen before in any path.

Here are examples of search algorithms that use these methods. In retrospect, a better organization would have been to have GENERAL-SEARCH take two arguments, a problem and a strategy, where the strategy would have a queuing function and an expansion function as components. That way, we wouldn't have EXPAND generate nodes that we are just going to throw away anyway.
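
As a sketch of the pattern used by the functions below, a repeated-state strategy is just a queuing function that filters newly generated nodes before enqueuing them. This mirrors what NO-CYCLES-DEPTH-FIRST-SEARCH does; the definition here is illustrative, not the distribution's code.

    (defun my-no-cycles-depth-first-search (problem)
      ;; Depth-first search whose queuing function drops nodes that revisit
      ;; a state already on their own path.
      (general-search problem
                      #'(lambda (old-q nodes)
                          (enqueue-at-front old-q (eliminate-cycles nodes)))))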

no-cycles-depth-first-search function (problem)

Do depth-first search, but eliminate paths with repeated states.

no-returns-breadth-first-search function (problem)

Do breadth-first search, but eliminate immediate returns to a prior state.

no-duplicates-breadth-first-search function (problem)

Do breadth-first search, but eliminate all duplicate states.

a*-search function (problem)

Search the nodes with the best f cost first. If a node is ever reached by two different paths, keep only the better path.

make-eliminating-queuing-fn function (eval-fn)

Auxiliary Functions

looping-node? function (node &optional depth)

Did this node's state appear previously in the path?

return-node? function (node)

Is this a node that returns to the state it just came from?

File search/algorithms/csp.lisp

Definition of CSPs (Constraint Satisfaction Problems).

csp-problem type (forward-checking? legality-checking? variable-selector)

A Constraint Satisfaction Problem involves filling in values for variables. We will use a CSP-state structure to represent this.
All CSPs use integers as names for both variables and their values. Constraints on variables var1, var2 are represented by a table, indexed by var1, var2, with each entry a list of all allowable pairs of values for var1, var2.
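
For illustration, here is one way such a constraint table could be laid out for a two-variable "not equal" constraint over the values 0 and 1. The exact construction used by the distribution's domains (e.g., NQUEENS-CONSTRAINTS) may differ in detail.

    ;; constraints[var1][var2] is a list of the allowable (val1 val2) pairs.
    (let ((constraints (make-array '(2 2) :initial-element nil)))
      (setf (aref constraints 0 1) '((0 1) (1 0))   ; var 0 vs. var 1
            (aref constraints 1 0) '((0 1) (1 0)))  ; var 1 vs. var 0
      constraints)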

csp-state type (unassigned assigned constraint-fn modified)

csp-var type (name domain value conflicts)

Generic Functions for CSP Problems

goal-test method ((problem csp-problem) node-or-state)

STATE is a goal if all variables are assigned legally.

successors method ((problem csp-problem) s)

Algorithms for Solving Constraint Satisfaction Problems

csp-backtracking-search function (problem &optional queuing-fn)

csp-forward-checking-search function (problem &optional queuing-fn)

Auxiliary Functions

print-structure method ((state csp-state) stream)

csp-legal-statep function (s)

filter-domains function (name value unassigned constraint-fn)

csp-modifications function (s &optional variable-selector-fn)

modify-assignment function (s var name new assigned constraint-fn)

most-constrained-variable function (vars)

random-conflicted-variable function (vars)

min-conflicts-value function (s)

csp-empty-domainp function (s)

csp-legal-values function (name values assigned constraint-fn)

csp-legal-assignmentp function (name value assigned constraint-fn)

csp-explicit-check function (name1 value1 name2 value2 constraints)

csp-random-completion function (s)

csp-conflicts function (var vars constraint-fn)


File search/algorithms/ida.lisp


Iterative Deepening A* (IDA*) Search

tree-ida*-search function (problem)

Iterative Deepening Tree-A* Search [p 107].

dfs-contour function (node problem f-limit)

Return a solution and a new f-cost limit.

File search/algorithms/iterative.lisp

Iterative Improvement Search Algorithms

Currently these do not do repeated-state checking. Each takes a problem and returns two values: like all search algorithms, the first is a solution node or nil, but the second value will be the best node found so far, even if it is not a solution. We will assume that all evaluations are costs (i.e., we're seeking minima).

Top Level Functions

hill-climbing-search function (problem &optional stopping-criterion)

Search by picking the best successor according to heuristic h. Stops according to stopping-criterion.

simulated-annealing-search function (problem &optional schedule)

Like hill-climbing-search, except that we pick a next node randomly; if it is better, or if the badness of the next node is small and the 'temperature' is large, then we accept it; otherwise we ignore it. We halt when the temperature, TEMP, hits zero [p 113].
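
The acceptance rule described above amounts to something like the following sketch, where DELTA is the increase in cost from the current node to the candidate and TEMP is the current temperature. This is illustrative, not the exact code in the file.

    (defun accept-move-p (delta temp)
      ;; Always accept an improvement; accept a worsening move with
      ;; probability exp(-delta/temp), which shrinks as TEMP cools.
      (or (<= delta 0)
          (< (random 1.0) (exp (- (/ delta temp))))))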

random-restart-search function (problem-fn &optional n)

Random-restart hill-climbing repeatedly calls hill-climbing-search. PROBLEM-FN should return a problem with a random initial state. We look at N different initial states, and keep the best solution found.

hill-climbing-until-flat-n-times-search function (problem &optional n)

Do hill climbing, but stop after no improvement N times in a row.

Auxiliary Functions

local-minimum function (current next)

Stop when the next state is worse than the current.

minimum-or-flat function (current next)

Stop when the next state is no better than the current.

minimum-or-flat-n-times function (n)

Return a function that stops when no improvement is made N times in a row.

csp-termination function (current next)

make-exp-schedule function (&key k lambda limit)

Return an exponential schedule function with time limit.

File search/algorithms/sma.lisp

Currently contains a definition of a version of SMA* that operates on search trees (i.e., no repeated-state checking). [[Need to update to eliminate looping when memory is too small and to signal suboptimal solutions when appropriate.]] Although the basic algorithm is quite simple, the bookkeeping is not.

tree-sma function (problem &optional memory-size)

tree-get-next-successor returns the next successor of n, if any (else nil)

tree-get-next-successor function (n q memory-size problem)

tree-backup-f-cost updates the f-cost for a node's ancestors as needed

tree-backup-f-cost function (node q &optional was-open?)

tree-prune-open removes the worst node from the open list. The node is discarded from the open list, and its successors are dumped to recycle memory. If the parent was closed, it must be re-opened with an updated f-cost (there was no need to do this until now because it wasn't on the open list anyway). Closed parent or not, the worst node becomes an unexpanded successor of the parent.

tree-prune-open function (q)

tree-unexpand-successor function (successor parent)

deepest-least-leaf function (q)

shallowest-largest-leaf function (q)

find-leaf function (node)

leafp function (n)

openp function (n)


File search/algorithms/minimax.lisp

Deciding What Move to Make in a Game by Minimax or Alpha-Beta Search

The minimax decision procedure returns the optimal move in the game using exhaustive generation of the entire game tree. The implementation uses the fact that the evaluation and utility functions return a list of values, one from the point of view of each player, with the "current" player first. Hence, rather than using #'min, we always use #'max for the current player. A successor's value is passed up the tree using right-rotation. This works for any number of players. The notation "a+s" means an (action . state) pair.
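
For example, the right-rotation step can be pictured as follows (an illustrative helper, not necessarily the name used in the file): a successor's value list has the player to move at the successor first, so rotating the last value to the front re-expresses it from the point of view of the player to move at the parent.

    (defun right-rotate-values (values)
      ;; Move the last value to the front of the list.
      (append (last values) (butlast values)))

    ;; Three players in move order A, B, C: a successor where B is to move
    ;; reports values as (B-value C-value A-value); right-rotation yields
    ;; (A-value B-value C-value) for use at A's node.
    (right-rotate-values '(3 7 5))   ; => (5 3 7)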

Minimax

minimax-decision function (state game)

Return the best action, according to backed-up evaluation. Searches the whole game tree, all the way down to the leaves. This takes too much time for all but the simplest games, but it is guaranteed to produce the best action.

minimax-value function (state game)

Minimax with Cutoff

minimax-cutoff-decision function (state game eval-fn limit)

Return the best action, according to backed-up evaluation down to LIMIT. After we search LIMIT levels deep, we use EVAL-FN to provide an estimate of the true value of a state; thus the action may not actually be best.

minimax-cutoff-value function (state game eval-fn limit)

game-successors function (state game)

Return a list of (move . state) pairs that can be reached from this state.

terminal-values function (state)

Return the values of the state for each player.

Alpha-Beta Search

The alpha-beta decision procedure returns the optimal move according to a limited-depth search using the evaluation function. It returns the same action as minimax-cutoff-decision, but examines fewer nodes. This version of alpha-beta works only for two players, and requires that the game is "zero-sum", i.e., the evaluation for one player is the opposite of the evaluation for the other.

alpha-beta-decision function (state game eval-fn &optional limit)

Return the estimated best action, searching up to LIMIT and then applying the EVAL-FN.

alpha-value function (state game alpha beta eval-fn limit)

beta-value function (state game alpha beta eval-fn limit)


File search/environments/games.lisp

Game-Playing

Definitions for all n-player turn-taking games. The idea is that each particular game we want to deal with will define a subtype of the GAME structure, with methods for LEGAL-MOVES, MAKE-MOVE, and GAME-OVER?.

In the description of a game, a player is an atomic name: X or O, or BLACK or WHITE. The current state is kept as an instance of GAME-STATE (which we do not expect to create subtypes of). We use GAME->ENVIRONMENT to turn an instance of a game into an environment, which we can then run. The function RUN-GAME provides a simple interface to do this. Corresponding to player X there is an agent with name X who has an agent program that chooses a move for X, given a game-state as the percept.
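
A hypothetical usage sketch, pairing a random player with a human player on a 3x3 board. MAKE-TTT-GAME is documented in search/domains/ttt.lisp below; passing agent type names is assumed to be acceptable here, per GAME->ENVIRONMENT.

    ;; Build the game, wrap it in an environment, and run it.
    (run-game (make-ttt-game :n 3 :k 3)
              :agents '(random-game-agent human-game-agent))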

game type (initial-state best worst)

A game is defined in terms of the starting position (the initial-state), the rewards for winning and losing, and three generic functions:
LEGAL-MOVES: a list of moves that can be made in this state.
MAKE-MOVE: generate a new state.
GAME-OVER?: are we finished?
You provide methods for these for each new subtype of game.

game-state type (board players scores terminal? previous-move)

Everything you need to know about the state of the game. The players are given as a list of names, with the player whose move it is first. A property list keeps the scores for each player. In some games, scores are accumulated as you go, in others a score is awarded only at the end.

Generic Functions for Games

legal-moves method ((game game) state)

Return a list of legal moves

make-move method ((game game) state move)

Return the new state that results from making this move.

game-over? method ((game game) state)

Is the game finished?

Game-playing environments

game-environment type (game)

game->environment function (game &key agents)

Convert a game into an environment. AGENTS can be a list of agents or agent types.

run-game function (game &key agents)

Build an environment around this game, and run it.

Implementation of Generic Functions for Game Environments

update-fn method ((env game-environment))

performance-measure method ((env game-environment) agent)

Look up the agent's score in the current state.

get-percept method ((env game-environment) agent)

We assume all agents can perceive the whole state.

initialize method ((env game-environment))

Install an agent program (based on the agent's algorithm slot) in each agent. The program makes sure an agent only moves when it is its turn. Also initialize the name of each agent, and the environment's state.

termination? method ((env game-environment))

assign-agent-scores function (state env)

Auxiliary functions

legal-move? function (move state game)

current-player function (game-state)

previous-player function (game-state)

game-players function (game)

current-agent function (env)

agent-with-name function (name env)

print-structure method ((game game) stream)

print-structure method ((state game-state) stream)

display-environment method ((env game-environment))


File search/environments/prob-solve.lisp

Problem-Solving Environments

The basic problem-solving-environment type, and the main generic functions for it.

problem-solving-environment type (problem)

An environment in which to solve problems. The state of the environment is one of the states from the problem, starting with the initial state.

get-percept method ((env problem-solving-environment) agent)

All agents can access the complete state of the environment.

update-fn method ((env problem-solving-environment))

Set the state to the result of executing the agent's action.

performance-measure method ((env problem-solving-environment) agent)

Score of 1 for solving problem; 0 otherwise.

initialize method ((env problem-solving-environment))

Get the initial state from the problem, and supply agents with programs.

problem-solving-program function (search-algorithm problem)

Given a search algorithm, return a program that at the start searches for a solution, then executes the steps of the solution, then stops.

termination? method ((env problem-solving-environment))

Stop when the problem is solved, or when an agent says stop.

Converting a Problem to an Environment

problem->environment function (problem &key algorithm)

Convert a problem into an environment. Then we can pass the environment to RUN-ENVIRONMENT, and the agent will search for a solution and execute it.
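
A hypothetical usage sketch, assuming the :ALGORITHM argument can name one of the search functions, that RUN-ENVIRONMENT comes from the agents subsystem, and that MAKE-CANNIBAL-PROBLEM is the default constructor for the CANNIBAL-PROBLEM type documented below:

    ;; The agent searches for a solution to the missionaries-and-cannibals
    ;; problem with A*, then executes it step by step in the environment.
    (run-environment
      (problem->environment (make-cannibal-problem) :algorithm 'a*-search))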

print-structure method ((env problem-solving-environment) stream)


File search/domains/cannibals.lisp

The Missionaries and Cannibals Domain

cannibal-problem type nil

The problem is to move M missionaries and C cannibals from one side of a river to the other, using B boats that each hold at most two people, in such a way that the cannibals never outnumber the missionaries in any one place. See [p 68].

goal-test method ((problem cannibal-problem) state)

The goal is to have no missionaries or cannibals left on the first side.

successors method ((problem cannibal-problem) state)

Return a list of (action . state) pairs. An action is a triple of the form (delta-m delta-c delta-b), where a positive delta means to move from side 1 to side 2; negative is the opposite. For example, the action (1 0 1) means move one missionary and one boat from side 1 to side 2.

cannibal-state type (m1 c1 b1 m2 c2 b2)

The state says how many missionaries, cannibals, and boats are on each side. The components m1,c1,b1 stand for the number of missionaries, cannibals, and boats, respectively, on the first side of the river. The components m2,c2,b2 are for the other side of the river.

take-the-boat function (state action)

Move a certain number of missionaries, cannibals, and boats (if possible).

cannibals-can-eat? function (state)

The cannibals feast if they outnumber the missionaries on either side.

File search/domains/ttt.lisp

The Game of Tic-Tac-Toe

Generalized Tic-Tac-Toe, in which any number of players take turns placing marks on an NxN board, trying to get K marks in a row. There are much more efficient representations than what we have chosen here, but this suffices to run an environment quickly. If an agent wants to search a large number of possible game states, then the agent should use its own efficient representation. After all, an agent's internal representation is independent of what's "actually" out there in the environment.

ttt-game type (n k)

Define an NxN tic-tac-toe game in which the object is to get K in a row.

make-ttt-game function (&key n k players)

Define an NxN tic-tac-toe game in which the object is to get K in a row.

legal-moves method ((game ttt-game) state)

List all possible legal moves.

make-move method ((game ttt-game) state move)

Return the new state that results from making this move.

game-over? method ((game ttt-game) state)

Checks if the last player to move made a complete row, column, or diagonal of length k, or if the board is full. If so, assign scores and return true; otherwise return nil.

Auxiliary Functions

check-k-in-a-row function (board x y n k dx dy player)

Does player have k in a row, through (x y) in direction (+/-dx +/-dy)?

count-pieces-in-direction function (board x y n dx dy player)

Count player's pieces starting at (x y) going in direction (dx dy).

File search/domains/cognac.lisp

The Game of Cognac

Definitions for a game of uncertain origin reputed to be played by bored cognac-tenders in the cellars. Similar to Tic-Tac-Toe but instead of playing anywhere, one can only play directly above an existing mark or on the bottom row.

cognac-game type nil

Define an NxN tic-tac-toe-like game. The object is to get K in a row.

make-cognac-game function (&key n k players)

Define an NxN Cognac game in which the object is to get K in a row.

legal-moves method ((game cognac-game) state)

List all possible legal moves. Like tic-tac-toe, except in each column you can only move to the lowest unoccupied square.
Everything else is inherited from ttt-game.

File search/domains/nqueens.lisp

The N-Queens Puzzle as a Constraint Satisfaction Problem

nqueens-problem type (n explicit?)

make-nqueens-problem function (&rest args &key n explicit?)

nqueens-initial-state function (n &optional explicit? complete?)

nqueens-constraints function (n)

nqueens-constraint-fn function (var1 val1 var2 val2)


File search/domains/path-planning.lisp

Path Planning in 2 Dimensions with Convex Polygonal Obstacles

path-planning-problem type (scene)

A problem involving moving among polygonal obstacles in 2D space. A state is the current vertex.

make-path-planning-problem function (&key scene)

Define a constructor to build a problem, using the scene properly.

successors method ((problem path-planning-problem) v1)

Return a list of (action . state) pairs, where the state is another vertex that is visible from the current vertex v1, and the action is a delta (dx dy) from v1 to the new one.

edge-cost method ((problem path-planning-problem) node action vertex)

The cost of an action is its distance.

h-cost method ((problem path-planning-problem) vertex)

The heuristic cost is the straight-line distance to the goal.

Defining the Vertex, Line, Polygon and Scene Types

vertex type (xy c-neighbor a-neighbor visible)

print-structure method ((v vertex) stream)

line type (xy1 xy2)

polygon type (vertices n)

scene type (polygons start goal)

Functions for testing whether one vertex is visible from another

vertices-visible-from function (v1 scene)

Find all the vertices that can be seen from this vertex.

vertices-in-view function (v scene)

Find all the other vertices that can be seen from v.

visible-p function (xy1 xy2 scene)

Predicate; return t iff xy1 is visible from xy2.

line-intersects-poly? function (line poly)

Predicate; return t iff line intersects poly.

intersects function (l1 l2)

Code for constructing the scene data structure

create-scene function (&key start goal polygons)

START and GOAL are xy points; polygons is a list of lists of vertices.

create-polygon function (points)

Specific scene, shown as Figure 4.17 [p 120]

*scene-4.17* variable

The scene in Figure 4.17 [p 120] with 8 obstacles.

File search/domains/puzzle8.lisp

The 8-Puzzle Problem

In this implementation of the 8-puzzle we strike a balance between efficiency and simplicity. We restrict ourselves to the 8-puzzle, instead of the general n-puzzle. The representation of states is not the obvious one (a 3x3 array), but it is both efficient and fairly easy to manipulate. We represent each tile as an integer from 0 to 8 (0 for the blank). We also represent each square as an integer from 0 to 8, arranged as follows:

     0 1 2
     3 4 5
     6 7 8
 
Finally, we represent a state (i.e., a complete puzzle) as the sum of the tile numbers times 9 to the power of the tile's square number. For example, the goal state from page 63:
 
     1 2 3                          1*9^0 + 2*9^1 + 3*9^2
     8 . 4  is represented by:    + 8*9^3 + 0*9^4 + 4*9^5
     7 6 5                        + 7*9^6 + 6*9^7 + 5*9^8 = 247893796
 
We represent actions with the four symbols <, >, ^, V to stand for moving the blank tile left, right, up and down, respectively. The heuristic functions implicitly refer to the goal, *8-puzzle-goal*. Call the function USE-8-PUZZLE-GOAL to change the goal state if you wish.
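
As a worked check of this encoding, the following sketch recomputes the number shown above by summing tile x 9^square over the nine squares, with the tiles listed in square order 0 through 8:

    (let ((tiles '(1 2 3  8 0 4  7 6 5)))   ; the goal board, square 0 first
      (loop for tile in tiles
            for square from 0
            sum (* tile (expt 9 square))))
    ;; => 247893796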

*8-puzzle-goal* variable

To be defined later

8-puzzle-problem type nil

The sliding tile problem known as the 8-puzzle.

successors method ((problem 8-puzzle-problem) state)

Generate the possible moves from an 8-puzzle state.

h-cost method ((problem 8-puzzle-problem) state)

Manhattan distance, or sum of city-block distances. This is h_2 on [p 102].

move-blank function (state from to)

Move the blank from one square to another and return the resulting state. The FROM square must contain the blank; this is not checked.

blank-square function (state)

Find the number of the square where the blank is.

random-8-puzzle-state function (&optional num-moves state)

Return a random state of the 8 puzzle.

random-move-blank function (state)

Return a state derived from this one by a random move.

Representing the Board

neighbors function (square)

The squares that can be reached from a given square.

8-puzzle-legal-moves function (square)

The moves that can be made when the blank is in a given square. This is a list of (direction destination-square) pairs.

8-puzzle-location function (square)

Return the (x y) location of a square number.

Representing States

8-puzzle-state function (&rest pieces)

Define a new state with the specified tiles.

8-puzzle-ref function (state square)

Return the tile that occupies the given square.

8-puzzle-print function (state &optional stream)

8-puzzle-display-fn function (node problem)

Setting Up the Goal

*8-puzzle-goal-locations* variable

A vector indexed by tile numbers, saying where the tile should be.

use-8-puzzle-goal function (goal)

Define a new goal for the 8 puzzle.

8-puzzle-goal-location function (tile)

Return the location where the tile should go.

9-power function (n)

Raise 9 to the nth power, 0 <= n <= 9.

Alternative Heuristic Function

misplaced-tiles function (state)

Number of misplaced tiles. This is h_1 on [p 102].

misplaced-tile? function (state square)

Is the tile in SQUARE different from the corresponding goal tile? Don't count the blank.

File search/domains/route-finding.lisp

Find a Route Between Cities on a Map

route-finding-problem type (map)

The problem of finding a route from one city to another on a map. By default it is a random map. A state in a route-finding problem is just the name of the current city. Note that a more complicated version of this problem would augment the state with considerations of time, gas used, wear on car, tolls to pay, etc.

successors method ((problem route-finding-problem) city-name)

Return a list of (action . new-state) pairs. In this case, the action and the new state are both the name of the city.

edge-cost method ((problem route-finding-problem) node action city)

The edge-cost is the road distance to the next city.

h-cost method ((problem route-finding-problem) city-name)

The heuristic cost is the straight-line distance to the goal.

The City and Map data structures

city type (name loc neighbors)

A city's loc (location) is an (x y) pair. The neighbors slot holds a list of (city-name . distance-along-road) pairs. Be careful to distinguish between a city name and a city structure.
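
For illustration, a single map entry in this representation might look like the following; the names, coordinates, and distances are invented, and MAKE-CITY is assumed to be the structure's default constructor.

    (make-city :name 'a
               :loc '(10 20)                      ; (x y) position
               :neighbors '((b . 15) (c . 40)))   ; (city-name . road-distance)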

road-distance function (city1 city-name2)

The distance along the road between two cities. The first is a city structure, the second just the name of the intended destination.

straight-distance function (city1 city2)

Distance between two cities on a straight line (as the crow flies).

find-city function (name map)

Look up the city on the map, and return its information.

random-route-map function (&key n-cities width height min-roads max-roads)

Return a random map with n-cities in it, and some roads between them. Each city is connected to between MIN-ROADS and MAX-ROADS other cities. The default is from 2 to 5. The road between any two cities has a length of 1 to 1.5 times the straight-line distance between them.

build-road function (city1 city2)

Construct a road between two cities.

number->name function (i)

Turn an integer into a symbol. 1-26 go to A-Z; beyond that, use Ci.

The Romanian Map

*romania-map* variable

A representation of the map in Figure 4.2 [p 95]. But note that the straight-line distances to Bucharest are NOT the same.

romanian-problem type nil

A route-finding problem on the Romania map, with random start and goal.

File search/domains/tsp.lisp

The Travelling Salesperson Problem (TSP)

Find a tour: a path that visits every city exactly once, and returns to the starting city. The shorter the total distance, the better. This builds on the map data structure defined in route-finding.lisp. It assumes that the map is a complete graph: there is a path from every city to every other city.

Note: the TSP is NP-complete in the general case, but there are some good algorithms for finding approximate solutions, particularly when the triangle inequality is satisfied (the path from A->C is never longer than A->B->C). Many of these algorithms are based on the idea of building a minimum spanning tree, converting it into a tour, and perhaps modifying it. We don't go into that here (because we are more interested in hooking up to the general search procedures than in special-purpose algorithms), but note that our tsp-h heuristic function is a relaxed version of a minimum spanning tree.

tsp-problem type (map)

make-tsp-problem function (&key map start)

Constructor for TSP problems. The map must be a complete graph.

edge-cost method ((problem tsp-problem) node action state)

h-cost method ((problem tsp-problem) state)

A lower bound on the cost is the distance to ???

successors method ((problem tsp-problem) state)

Return a list of (action . state) pairs. Actions are just the name of the city to go to. You can only go to a city you haven't visited yet, unless you've visited them all, in which case you can only go back home.

goal-test method ((problem tsp-problem) state)

The goal is to leave no unvisited cities and get back to start.

tsp type (visited to-visit)

A state for a TSP problem lists the cities visited and those remaining to visit.

Auxiliary Functions

nearest-neighbor-distance function (name candidate-names map)

Find, among the cities in CANDIDATE-NAMES, the one that is closest to city NAME, and return the distance to it.

path-lower-bound function (city-names map)

Find a lower bound for a path through these cities.

random-tsp-map function (&key n-cities)

check-tsp-map? function (map)

tsp-city-name function (tsp-state)

The current city: the last one visited.

tsp-start function (tsp-state)


File search/domains/vacuum.lisp


Definitions for Searching in the Vacuum-World Domain

vacuum-state type (orientation dirt m n on location)

*vacuum-home* variable

Vacuum problem generator, and vacuum domain functions

vacuum-problem function (m n &optional dirt dirt-probability)

Create a Vacuum World problem.

vacuum-initial-state function (m n dirt dirt-probability)

vacuum-goalp function (state)

Is this a goal state?

vacuum-successors function (state)

Return a list of (action . state) pairs.

File search/agents/ps-agents.lisp

Problem-Solving Agents

problem-solving-agent type (algorithm)

An agent that applies a search algorithm to a problem to find a solution, follows the steps of the solution one at a time, and then stops.

Game-Playing Agents

game-agent type (algorithm)

An agent that plays n-player games. The ALGORITHM slot is filled by a function that takes state and game arguments, and returns a move.

random-game-agent type nil

A game-playing agent that picks randomly from the legal moves.

human-game-agent type nil

A game-playing agent that asks the human user what to do.

pick-random-move function (state game)

ask-game-user function (state game)


File search/agents/ttt-agent.lisp

An Agent for Playing Tic-Tac-Toe

alpha-beta-ttt-agent type nil

A game-playing agent that uses ttt-eval to do alpha-beta search.

ttt-eval function (state)

Evaluate a TTT board on a scale from -1 to +1.
