**inductive-learning.lisp**, **learning-curves.lisp**, **dtl.lisp**, **dll.lisp**, **nn.lisp**, **perceptron.lisp**, **multilayer.lisp**, **q-iteration.lisp**

**restaurant-multivalued.lisp**, **restaurant-real.lisp**, **restaurant-boolean.lisp**, **majority-boolean.lisp**, **ex-19-4-boolean.lisp**, **and-boolean.lisp**, **xor-boolean.lisp**, **4x3-passive-mdp.lisp**

**passive-lms-learner.lisp**, **passive-adp-learner.lisp**, **passive-td-learner.lisp**, **active-adp-learner.lisp**, **active-qi-learner.lisp**, **exploring-adp-learner.lisp**, **exploring-tdq-learner.lisp**

**learning-problem** *type* (examples attributes goals)

**attribute-name** *function* (attribute)

**attribute-values** *function* (attribute)

**attribute-value** *function* (attribute example)

**random-examples** *function* (n attributes)

**classify** *function* (unclassified-examples goals h performance-element)

**consistent** *function* (examples goals h performance-element)

* Coded examples have goal values (in a single list)*
* followed by attribute values, both in fixed order*

**code-examples** *function* (examples attributes goals)

**code-example** *function* (example attributes goals)

**code-unclassified-example** *function* (example attributes goals)
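
The fixed-order encoding described in the comment above can be illustrated with a small Python sketch. The function and attribute names here are hypothetical stand-ins, not the Lisp API:

```python
def code_example(example, attributes, goals):
    """Encode an example per the convention above: goal values first
    (in a single list), then attribute values, both in fixed order.
    `example` is a dict from names to values; purely illustrative."""
    return [[example[g] for g in goals]] + [example[a] for a in attributes]

# A tiny restaurant-style example with made-up attribute names:
ex = {"patrons": "full", "hungry": True, "will-wait": False}
coded = code_example(ex, ["patrons", "hungry"], ["will-wait"])
# coded == [[False], "full", True]
```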

**print-learning-problem** *function* (problem &optional stream depth)

**learning-curve** *function* (induction-algorithm performance-element examples attributes goals trials training-size-increment &optional error-fn)
* induction-algorithm: examples -> hypothesis*
* performance-element: hypothesis + example -> prediction*

* this version uses incremental data sets rather than a new batch each time*

**incremental-learning-curve** *function* (induction-algorithm performance-element examples attributes goals trials training-size-increment &optional error-fn)
* induction-algorithm: examples -> hypothesis*
* performance-element: hypothesis + example -> prediction*

**accuracy** *function* (h performance-element test-set goals &optional error-fn)

**decision-tree-learning** *function* (problem)

**dtl** *function* (examples attributes goal &optional prior)

**distribution** *function* (examples goal)

**majority** *function* (examples goal)

**select-attribute** *function* (examples attributes goal)

**information-value** *function* (a examples goal)

**bits-required** *function* (d)
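
Judging by its name, bits-required computes the standard entropy of a distribution in bits; a minimal Python sketch of that formula (an assumption based on the name, not the Lisp source):

```python
from math import log2

def bits_required(d):
    # Entropy of a discrete distribution d, in bits: -sum_v p(v) * log2 p(v).
    # Zero-probability entries contribute nothing, so they are skipped.
    return -sum(p * log2(p) for p in d if p > 0)

# A fair coin takes exactly one bit to encode:
bits_required([0.5, 0.5])  # → 1.0
```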

* dtpredict is the standard "performance element" that *
* interfaces with the example-generation and learning-curve functions*

**dtpredict** *function* (dt example)

**decision-list-learning** *function* (k problem)

**dll** *function* (k examples attributes goal)

* select-test finds a test of size at most k that picks out a set of*
* examples with uniform classification. Returns test and subset.*

**select-test** *function* (k examples attributes goal)
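
The search described in the comment above can be sketched in Python. The dict-based example representation and helper names are assumptions for illustration, not the dll.lisp interface:

```python
from itertools import combinations

def select_test(k, examples, attributes, goal):
    """Look for a conjunction of at most k attribute-value tests whose
    matching (nonempty) subset of examples all share one goal value.
    Returns (test, subset) or None. Examples are dicts; illustrative only."""
    for size in range(1, k + 1):
        for attrs in combinations(attributes, size):
            # Only value combinations that actually occur can match anything.
            for vals in {tuple(e[a] for a in attrs) for e in examples}:
                test = dict(zip(attrs, vals))
                subset = [e for e in examples
                          if all(e[a] == v for a, v in test.items())]
                if subset and len({e[goal] for e in subset}) == 1:
                    return test, subset
    return None
```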

**select-k-test** *function* (k examples attributes goal test-attributes)

**generate-terms** *function* (attributes)

**uniform-classification** *function* (examples goal)

**passes** *function* (example test)

* dlpredict is the standard "performance element" that *
* interfaces with the example-generation and learning-curve functions*

**dlpredict** *function* (dl example)

**unit** *type* (parents children weights g dg a in gradient)
* parents: sequence of indices of units in previous layer*
* children: sequence of indices of units in subsequent layer*
* weights: weights on links from parents*
* g: activation function*
* dg: activation gradient function g' (if it exists)*
* a: activation level*
* in: total weighted input*
* gradient: g'(in_i)*

* make-connected-nn returns a multi-layer network with layers given by sizes*

**make-connected-nn** *function* (sizes &optional previous g dg)

**step-function** *function* (threshold x)

**sign-function** *function* (threshold x)

**sigmoid** *function* (x)
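
For reference, the standard logistic activation that a sigmoid function computes, as a short Python sketch:

```python
from math import exp

def sigmoid(x):
    # Logistic activation: 1 / (1 + e^(-x)), mapping the reals into (0, 1).
    return 1.0 / (1.0 + exp(-x))

sigmoid(0.0)  # → 0.5
```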

* nn-learning establishes the basic epoch structure for updating.*
* It calls the desired updating mechanism to improve the network until*
* either all examples are correct or it runs out of epochs*

**nn-learning** *function* (problem network learning-method &key tolerance limit)

**nn-error** *function* (examples network)

**network-output** *function* (inputs network)

* nn-output is the standard "performance element" for neural networks*
* and interfaces to example-generating and learning-curve functions.*
* Since performance elements are required to take only two arguments*
* (hypothesis and example), nn-output is used in an appropriate*
* lambda-expression*

**nn-output** *function* (network unclassified-example attributes goals)

* unit-output computes the output of a unit given a set of inputs *
* it always adds a bias input of -1 as the zeroth input*

**unit-output** *function* (inputs unit)
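
The bias convention from the comment above can be sketched in Python with plain lists instead of the unit struct (so weights[0] is the bias weight); the names are illustrative:

```python
def unit_output(inputs, weights, g):
    """Output of one unit: prepend a bias input of -1 as the zeroth input,
    take the weighted sum against `weights`, and apply activation g."""
    ins = [-1.0] + list(inputs)
    total = sum(w * x for w, x in zip(weights, ins))
    return g(total)

# With a 0-threshold step activation and bias weight 1.5,
# the unit behaves as a two-input AND gate:
step = lambda x: 1 if x >= 0 else 0
unit_output([1, 1], [1.5, 1, 1], step)  # → 1
unit_output([1, 0], [1.5, 1, 1], step)  # → 0
```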

**get-unit-inputs** *function* (inputs parents)

**random-weights** *function* (n low high)

* print-nn prints out the network relatively prettily*

**print-nn** *function* (network)

* perceptron learning - single-layer neural networks*
* make-perceptron returns a one-layer network with m units, n inputs each*

**make-perceptron** *function* (n m &optional g)

**majority-perceptron** *function* (n &optional g)

* perceptron-learning is the standard "induction algorithm"*
* and interfaces to the learning-curve functions*

**perceptron-learning** *function* (problem)

* Perceptron updating - simple version without lower bound on delta*
* Hertz, Krogh, and Palmer, eq. 5.19 (p.97)*

**perceptron-update** *function* (perceptron actual-inputs predicted target &optional learning-rate)
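
The rule cited from Hertz, Krogh, and Palmer (eq. 5.19) moves each weight by learning-rate × (target − predicted) × input. A Python sketch under that reading, using the same bias-input convention; illustrative names, not the perceptron.lisp interface:

```python
def perceptron_update(weights, inputs, predicted, target, learning_rate=0.1):
    """One perceptron weight update: w_i += eta * (t - o) * x_i,
    with a bias input of -1 prepended as x_0. Returns the new weights."""
    ins = [-1.0] + list(inputs)
    delta = learning_rate * (target - predicted)
    return [w + delta * x for w, x in zip(weights, ins)]

# Target 1, predicted 0: weights move toward firing on this input.
perceptron_update([0.0, 0.0, 0.0], [1, 1], 0, 1, 0.5)  # → [-0.5, 0.5, 0.5]
```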

* back-propagation learning - multi-layer neural networks*
* backprop-learning is the standard "induction algorithm"*
* and interfaces to the learning-curve functions*

**backprop-learning** *function* (problem &optional hidden)

* Backprop updating - Hertz, Krogh, and Palmer, p.117*

**backprop-update** *function* (network actual-inputs predicted target &optional learning-rate)

**backpropagate** *function* (rnetwork inputs deltas learning-rate)
* rnetwork: network in reverse order*
* inputs: the inputs to the network*
* deltas: the "errors" for current layer*

**backprop-update-layer** *function* (layer all-inputs deltas learning-rate)

* compute-deltas propagates the deltas back from layer i to layer j*
* pretty ugly, partly because weights Wji are stored only at layer i*

**compute-deltas** *function* (jlayer ilayer ideltas)
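
The propagation the comment describes is delta_j = g'(in_j) Σ_i W_ji delta_i. A Python sketch with plain lists (the W_ji layout mirrors the "weights stored at layer i" note above, but the concrete representation is an assumption):

```python
def compute_deltas(j_gradients, i_weights, i_deltas):
    """Propagate errors back from layer i to layer j:
    delta_j = g'(in_j) * sum_i W_ji * delta_i.
    i_weights[i][j] holds W_ji, i.e. the weight from unit j to unit i,
    stored with the downstream unit i as in the comment above."""
    return [j_gradients[j] * sum(i_weights[i][j] * i_deltas[i]
                                 for i in range(len(i_deltas)))
            for j in range(len(j_gradients))]

# One downstream unit (delta 0.5) feeding two upstream units:
compute_deltas([1.0, 0.5], [[2.0, 4.0]], [0.5])  # → [1.0, 1.0]
```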

**q-entry** *function* (q a i)

**all-q-entries** *function* (i q)

**q-actions** *function* (s q)

* Given an MDP, determine the q-values of the states.*
* Q-iteration iterates on the Q-values instead of the U-values.*
* Basic equation is Q(a,i) <- R(i) + sum_j M(a,i,j) max_a' Q(a',j)*
* where Q(a',j) MUST be the old value not the new.*

**q-iteration** *function* (mdp &optional qold &key epsilon)
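
The basic equation above, Q(a,i) ← R(i) + Σ_j M(a,i,j) max_a' Q(a',j), with the old Q-values on the right-hand side, can be sketched in Python. The dictionary-based interface for R and M is hypothetical, not the mdp structure used by the Lisp code:

```python
def q_iteration(R, M, states, actions, epsilon=1e-6):
    """Iterate Q(a,i) <- R(i) + sum_j M[(a,i,j)] * max_a' Qold(a',j)
    until the largest change drops below epsilon. A frozen copy `qold`
    is read on the right-hand side, as the comment above requires."""
    q = {(a, i): 0.0 for a in actions for i in states}
    while True:
        qold = dict(q)
        for a in actions:
            for i in states:
                q[(a, i)] = R[i] + sum(
                    M.get((a, i, j), 0.0) * max(qold[(ap, j)] for ap in actions)
                    for j in states)
        if max(abs(q[k] - qold[k]) for k in q) < epsilon:
            return q

# Two states, one action; state 1 is terminal (no outgoing transitions):
q = q_iteration({0: 0.0, 1: 1.0}, {("a", 0, 1): 1.0}, [0, 1], ["a"])
```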

**average-successor-q** *function* (a i q m)

* Compute optimal policy from Q table*

**q-optimal-policy** *function* (q)

* Choice functions select an action under specific circumstances*
* Pick a random action*

**q-random-choice** *function* (s q)

* Pick the currently best action*

**q-dmax-choice** *function* (s q)

* Pick the currently best action with tie-breaking*

**q-max-choice** *function* (s q)

***restaurant-multivalued*** *variable*

***restaurant-multivalued-problem*** *variable*

***restaurant-real*** *variable*

***restaurant-real12-problem*** *variable*

***restaurant-real100-problem*** *variable*

***restaurant-boolean*** *variable*

***restaurant-boolean-problem*** *variable*

***majority-boolean*** *variable*

***majority-boolean-problem*** *variable*

***ex-19-4-boolean-problem*** *variable*

***and-boolean-problem*** *variable*

***xor-boolean-problem*** *variable*

***4x3-passive-m-data*** *variable*

***4x3-passive-r-data*** *variable*

***4x3-passive-mdp*** *variable*

**make-passive-lms-learner** *function* ()

**lms-update** *function* (u e percepts n)

**make-passive-adp-learner** *function* ()

* Updating the transition model according to the observed transition i->j.*
* Fairly tedious because of initializing new transition records.*

**update-passive-model** *function* (j percepts m)
* j: current state (destination of transition)*
* percepts: in reverse chronological order*
* m: transition model, indexed by state*

* (passive-policy M) makes a policy of no-ops for use in value determination*

**passive-policy** *function* (m)

***alpha*** *variable*

* initial learning rate parameter*

**make-passive-td-learner** *function* ()

**td-update** *function* (u e percepts n)

**current-alpha** *function* (n)

**make-random-adp-learner** *function* (actions)

**make-maximizing-adp-learner** *function* (actions)

**make-active-adp-learner** *function* (actions choice-function)

* Update current model to reflect the evidence from the most recent action*

**update-active-model** *function* (mdp percepts action)
* mdp: current description of the environment*
* percepts: in reverse chronological order*
* action: last action taken*

**make-random-qi-learner** *function* (actions)

**make-maximizing-qi-learner** *function* (actions)

**make-active-qi-learner** *function* (actions choice-function)

***r+*** *variable*

***ne*** *variable*

**exploration-function** *function* (u n)
**make-exploring-adp-learner** *function* (actions)

* Given an environment model M, determine the values of states U.*
* Use value iteration, with initial values given by U itself.*
* Basic equation is U(i) <- r(i) + max_a f(sum_j M(a,i,j)U(j), N(a,i))*
* where f is the exploration function. Does not apply to terminal states.*

**exploratory-value-iteration** *function* (mdp &optional uold &key epsilon)
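
The exploration function f in the equation above is conventionally optimistic: it returns the reward bound *r+* until the action has been tried *ne* times, and the plain utility estimate afterwards. A Python sketch under that assumption (the constant values here are illustrative, not the Lisp defaults):

```python
R_PLUS = 2.0  # optimistic estimate of the best possible reward (assumed value)
N_E = 5       # how many tries before trusting the utility estimate (assumed)

def exploration_function(u, n):
    """f(u, n): return the optimistic bound R+ while the action has been
    tried fewer than N_E times; otherwise return the utility estimate u."""
    return R_PLUS if n < N_E else u

exploration_function(0.5, 0)  # → 2.0  (still exploring)
exploration_function(0.5, 5)  # → 0.5  (tried enough; use the estimate)
```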

**exploration-choice** *function* (s u m r)

**make-exploring-tdq-learner** *function* (actions)

**update-exploratory-q** *function* (q a i j n ri)

**exploration-q-choice** *function* (s q n)
