Appendix A. Glossary
activation function
A function that transforms the output of a neuron in an artificial neural network, typically to introduce nonlinearity or to clamp the output within some range (chapter 7).
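For instance, the sigmoid function is a common activation function; a minimal sketch:

```python
from math import exp

def sigmoid(x: float) -> float:
    # Squashes any real-valued input into the range (0, 1),
    # supplying the nonlinearity a neural network needs.
    return 1.0 / (1.0 + exp(-x))
```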
acyclic
Describes a graph that contains no cycles (chapter 4).
admissible heuristic
A heuristic for the A* search algorithm that never overestimates the cost to reach the goal (chapter 2).
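As a sketch, straight-line (Euclidean) distance is admissible for grid search with unit-cost moves, since no path can be shorter than the straight line; the `Cell` type here is illustrative:

```python
from math import sqrt
from typing import NamedTuple

class Cell(NamedTuple):
    row: int
    col: int

def euclidean(current: Cell, goal: Cell) -> float:
    # The straight-line distance can never exceed the true
    # remaining path cost, so it never overestimates.
    return sqrt((goal.row - current.row) ** 2
                + (goal.col - current.col) ** 2)
```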
artificial neural network
A simulation of a biological neural network using computational tools to solve problems not easily reduced to forms amenable to traditional algorithmic approaches. Note that the operation of an artificial neural network generally strays significantly from its biological counterpart (chapter 7).
auto-memoization
A version of memoization implemented at the language level, in which the results of function calls without side effects are stored for lookup upon further identical calls (chapter 1).
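Python's `functools.lru_cache` decorator is one example of auto-memoization; a minimal sketch using the classic Fibonacci function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Results of this side-effect-free function are cached,
    # so identical calls are looked up rather than recomputed.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```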
backpropagation
A technique used for training neural network weights according to a set of inputs with known correct outputs. Partial derivatives are used to calculate each weight’s “responsibility” for the error between actual results and expected results. These “deltas” are used to update the weights for future runs (chapter 7).
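A toy illustration of the idea, assuming a single sigmoid neuron with one weight and a squared-error loss (not a full network):

```python
from math import exp

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + exp(-x))

weight = 0.5          # initial weight (arbitrary)
x, expected = 1.5, 1.0  # one training input with known output
for _ in range(100):
    output = sigmoid(weight * x)
    # Chain rule gives the weight's "responsibility" for the error:
    # dE/dw = (output - expected) * output * (1 - output) * x
    delta = (output - expected) * output * (1.0 - output) * x
    weight -= 0.1 * delta  # update the weight for future runs
```

After repeated updates, the neuron's output moves toward the expected value.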