Skim Pages 29-61: Problems, Problem Spaces, and Search
 There are some who believe the simple statement, "AI is
search"
 State space is sometimes explicit, like a matrix of board
positions, and sometimes implicit, like a set of rules or
productions
 The analytical approach to problem solving is generally the
same in every domain. In AI the usual procedure is
 define a state space
 identify initial states
 identify goal states
 specify operators that change states
 Note, this is roughly the same procedure as any requirements
analysis for any software project.
 Note, "define a state space" implies subproblem decomposition,
also something done in every software project. See
How to Solve It by
G. Polya
 It's stated like this in AI because so often the problem
area is akin to a board game
 Control strategies break down into the three types of
searches
 depth-first search: top to bottom, left to right, the most
easily implemented (recursive) algorithm
 breadth-first search: left to right, top to bottom, visiting
all the children before visiting a grandchild
 heuristic search, sometimes called "best first": where
there's an evaluation function you can use to choose the next path
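The first two control strategies can be sketched in a few lines; the only difference is the data structure holding the frontier (a stack vs. a FIFO queue). The toy state space here is invented for illustration, not from the text.

```python
from collections import deque

# Toy state space as an adjacency dict (hypothetical example).
tree = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def depth_first(start, goal):
    """Top to bottom, left to right: a stack (or, equivalently, recursion)."""
    stack = [start]
    visited = []
    while stack:
        node = stack.pop()
        visited.append(node)
        if node == goal:
            return visited
        # Push children reversed so the leftmost child is expanded first.
        stack.extend(reversed(tree[node]))
    return visited

def breadth_first(start, goal):
    """Left to right, top to bottom: a FIFO queue, so all children
    are visited before any grandchild."""
    queue = deque([start])
    visited = []
    while queue:
        node = queue.popleft()
        visited.append(node)
        if node == goal:
            return visited
        queue.extend(tree[node])
    return visited
```

Running both on the same tree makes the visiting orders concrete: depth-first reaches D before C, breadth-first visits the whole second level first.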
 Key quote from the chapter (pg. 53). "These two problems,
chess and newspaper story understanding, illustrate the
difference between problems for which a lot of knowledge is
important only to constrain the search for a solution and those
for which a lot of knowledge is required even to be able to
recognize a solution".
Skim Pages 63-98: Heuristic Search Techniques
 The general message(s) are these:
 very hard problems will tend
to have very large search spaces.
 heuristics (general rules that USUALLY apply), can be used
to limit search
 some sort of evaluation function is always necessary
 key vocabulary: heuristics allow you to "prune the search tree"
 generate-and-test: a brute-force depth-first search in its
simplest form
 hill-climbing: a variation on generate-and-test that
incorporates visualization (see the usual diagram)
key vocabulary: the problem of local minima / maxima
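A minimal hill-climbing sketch (the objective function and step rule are invented for illustration). The stopping condition is exactly where the local-maximum problem bites: the search halts as soon as no neighbor improves, whether or not a better state exists elsewhere.

```python
def hill_climb(x, f, neighbors):
    """Repeatedly move to the best-scoring neighbor; stop when none improves."""
    while True:
        best = max(neighbors(x), key=f, default=x)
        if f(best) <= f(x):
            return x          # local maximum (possibly not the global one)
        x = best

# Hypothetical example: maximize -(x - 3)^2 over the integers, stepping by 1.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
```

From a start of 0 this climbs 0 → 1 → 2 → 3 and stops, since both neighbors of 3 score worse.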
 backtracking is a simple universal strategy that requires
the algorithm to maintain state information.
 simulated annealing: a variation on hill-climbing where
random guesses are introduced (sometimes called stochastic
search)
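The usual acceptance rule can be sketched as follows (parameter names and the cooling schedule are my choices, not from the text). Worse moves are accepted with probability exp(delta / T), so early on, at high temperature, the search can escape local maxima; as T cools it behaves like plain hill-climbing.

```python
import math
import random

def anneal(x, f, neighbor, T=10.0, cooling=0.95, steps=200):
    """Maximize f by a simulated-annealing sketch with geometric cooling."""
    random.seed(0)                      # deterministic, for illustration only
    best = x
    for _ in range(steps):
        x2 = neighbor(x)
        delta = f(x2) - f(x)
        if delta > 0 or random.random() < math.exp(delta / T):
            x = x2                      # accept, sometimes a worse state
        if f(x) > f(best):
            best = x                    # remember the best state seen
        T *= cooling
    return best
```

Because the best state seen is remembered, the result is never worse than the starting point.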
 best-first search: much the same as above, but where the
evaluation function is much more reliable.
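Best-first search is usually implemented with a priority queue ordered by the evaluation function: always expand the node that scores best. The graph and heuristic below are invented for illustration.

```python
import heapq

def best_first(start, goal, neighbors, h):
    """Greedy best-first search: expand the frontier node with lowest h()."""
    frontier = [(h(start), start)]
    visited = set()
    order = []
    while frontier:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if node == goal:
            return order
        for n in neighbors(node):
            if n not in visited:
                heapq.heappush(frontier, (h(n), n))
    return order
```

On the integers with neighbors n-1 and n+1 and h(n) = distance to the goal, the search marches straight toward the goal and never expands the bad direction.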
 agenda-driven search: perhaps the most interesting topic in
this chapter, as it produces answers for evaluation by
reordering tasks. Something of an oddity here
 problem reduction: another term for "pruning"
 constraint satisfaction: be aware AI uses a strange sense
of "constraint"; the classic example is the seating chart
problem.
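The seating chart problem can be stated as generate-and-test over assignments: the "constraints" are just pairs that must not sit together. The guests and constraints below are invented for illustration; a real CSP solver would backtrack instead of enumerating every permutation.

```python
from itertools import permutations

# Hypothetical guests at a round table.
guests = ["Ann", "Bob", "Cid", "Dee"]
# Constraint: these pairs must NOT sit next to each other.
apart = {("Ann", "Bob"), ("Cid", "Dee")}

def ok(seating):
    """Test every adjacent pair around the (circular) table."""
    for i, g in enumerate(seating):
        nxt = seating[(i + 1) % len(seating)]
        if (g, nxt) in apart or (nxt, g) in apart:
            return False
    return True

# Brute-force generate-and-test over all seatings.
solutions = [s for s in permutations(guests) if ok(s)]
```

With four seats, keeping Ann and Bob apart forces them opposite each other, which happens to satisfy the Cid/Dee constraint too.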
 meansends analysis: not usually described in a chapter on
heuristic search. Based on human behavior (as described in Polya
and elsewhere)
key vocabulary: subproblem decomposition
Read: 105-129: Knowledge Representation Issues
 the problemsolving power of search techniques is limited in
part because of their generality
 it is generally understood (in symbolic AI) that solving
complex problems depends on knowledge and mechanisms to
manipulate it
 the challenge is known as the (knowledge) representation
problem
 the discussion on knowledge level and symbol level, and
"representation mappings", is all about the relation between
symbols (syntax) and meaning
 one basic problem is translating informal natural language
statements into a formal notation
 dog(Spot) => Spot is a dog
 All x: dog(x) -> hastail(x) =>
All dogs have tails OR Every dog has a tail
 note: this one fact, and one inference rule, is enough to
produce a NEW fact => hastail(Spot)
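The dog/hastail example can be run as a tiny forward-chaining sketch: one stored fact plus one rule derives the new fact. The encoding (predicate/argument pairs, one-premise rules) is mine, chosen for brevity.

```python
# One fact and one inference rule, as in the dog(Spot) example.
facts = {("dog", "Spot")}
# Rule "for all x: dog(x) -> hastail(x)", encoded as (premise, conclusion).
rules = [("dog", "hastail")]

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))   # the NEW fact
                    changed = True
    return derived
```

The derived set ends up containing hastail(Spot) even though it was never asserted.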
 the authors note this is akin to generalized computer
programming: finding concrete implementation of abstract
concepts
 the authors do not note that the obverse of representation
is interpretation; representing facts and relations is for the
sole purpose of supporting inference.
i.e. dog(Spot) is just ASCII symbols without an inference mechanism
to provide meaning
 the typical AI representation is composed of two types of
thing: concepts (usually nouns) and relations
 relations are sometimes represented as a slot-and-filler
structure (also commonly, slots-with-roles-and-fillers), which
are also called attribute-value pairs
 vocabulary: frame system is a set of structures linked by
semantic relations
a semantic network is a set of concepts linked by semantic
relations
the latter is an older specialization of the former, mostly used in
early (associational) memory modeling systems
 sadly, there is no generally agreed upon set of relations
 the other key idea in knowledge representation is
abstraction/inheritance (and the special relation: ISA,
sometimes written AKO, and its inverse: instanceof)
 by combining these in straightforward ways, we can infer
that Spot is warmblooded without explicitly representing that
fact.
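Inheritance over ISA links is just a walk up the hierarchy: look for the property on the thing itself, then on its parent concept, and so on. The hierarchy and property names below are invented for illustration.

```python
# ISA / instance-of links (hypothetical hierarchy).
isa = {"Spot": "dog", "dog": "mammal"}
# Properties attached to concepts; note warm_blooded is stored
# only on "mammal", never on Spot.
properties = {"mammal": {"warm_blooded": True}}

def lookup(thing, prop):
    """Walk up the ISA chain until the property is found (or we run out)."""
    while thing is not None:
        if prop in properties.get(thing, {}):
            return properties[thing][prop]
        thing = isa.get(thing)
    return None
```

Asking whether Spot is warm-blooded succeeds by inheritance, without that fact being represented explicitly.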
 procedural knowledge refers to programming that effects
actions (like robot arms) or in if-then-else decision
making. This is an older term, not very useful any more.
 Note: many of the issues in knowledge representation are
similar to data structure issues
 Note: representing time is difficult (hence, there is an entire
branch of logic devoted to it)
 Vocabulary: granularity, "what level of detail?" "what are
the primitives?"
Skim: 131-169: Logic
 one fundamental issue with predicate logic is everything is
"truth-valued", which causes a difficult representational "fit"
for a large class of problems
 another issue is that theorem proving is both "generative"
and undecidable, where
 generative (also called forward reasoning) means starting
with axioms and theorems
(i.e. starting from first principles), and trying to generate a
new proposition that matches the goal
 undecidable means if the goal is a nontheorem, there's no
guarantee the procedure will halt
 note, however, that while the idea is conceptually
generative, the algorithms usually generate proofs by chaining
backward from the theorem to be proved to the axioms
 resolution theorem proving is conceptually the same, but
takes the approach of "contradicting the negation"
 note that unification is one of the steps in resolution
 one of the main reasons for the popularity of resolution,
unification, and PROLOG, is that the first two are relatively
easy to implement in the third
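To make "unification is one of the steps in resolution" concrete, here is a minimal unifier: variables are strings starting with "?", compound terms are tuples. This is a sketch of the core idea only (no occurs check, single-level substitution lookup), not PROLOG's implementation.

```python
def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or False if none exists."""
    if subst is None:
        subst = {}
    if subst is False:
        return False
    # Apply any existing binding before comparing.
    a = subst.get(a, a) if isinstance(a, str) else a
    b = subst.get(b, b) if isinstance(b, str) else b
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}                     # bind variable a
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}                     # bind variable b
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)             # unify argument by argument
            if subst is False:
                return False
        return subst
    return False
```

Unifying ("dog", "?x") with ("dog", "Spot") binds ?x to Spot; mismatched predicates fail.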
 note, as the authors say, "people do not think in
resolution".
Skim: 171-193: Rules
 rule-based systems are often called "expert systems"
 these are typically applied to diagnostic domains
(eg. medicine), although the most commercially successful one
configured computer systems (R1, by DEC).
 PROLOG is often used to implement rule-based systems
 one essential control method is the order in which the rules
are stored in the rule base
 the conceptual algorithm is the same as with logic-based
systems: begin with a goal statement (to be "proved") and look
for (chains of) assertions that prove it
 PROLOG provides a built-in search engine, but search control
is fixed (depth-first with backtracking), and it is very
difficult to apply domain knowledge to constrain search
 a pure PROLOG system (using strictly Horn clauses) is
decidable, and implements "negation as failure"
 negation as failure implies a "closed world assumption"
(that every useful fact is stored in the rule base)
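Negation as failure under the closed-world assumption reduces to: anything not derivable from the stored facts is treated as false. The facts here are invented for illustration.

```python
# Everything we know; under the closed-world assumption this is
# everything that is true (hypothetical facts).
facts = {("flies", "tweety"), ("bird", "tweety"), ("bird", "opus")}

def holds(query):
    """A query succeeds only if it is provable, i.e. stored."""
    return query in facts

def not_(query):
    """Negation as failure: succeeds exactly when the query cannot be proved."""
    return not holds(query)
```

Note that not_(("flies", "opus")) succeeds even though nothing was ever asserted about opus not flying; absence of proof counts as falsity.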
 this assumption causes a difficult representational "fit"
for a large class of problems
 rule-based systems get more interesting when there is a
facility for "partial matching" (as with the regular expressions
in ELIZA)
 expert systems evolved rule sets that included "meta rules"
(rules about rules) as a way to exert more control over
problemsolving and runtimes
 historical note: expert systems were extremely fashionable
(and fundable) in the mid-80s. This led to the formation of a
mini-industry for "expert system shells" (systems for building
expert systems), which in turn led to the famous debunking paper:
"The expert system shell game".
 summary: expert systems are known to be "brittle" (they are
difficult to maintain and difficult to add onto) and they are
known for "ungraceful" failures (not producing answers, or
producing very bad answers).
Skim: 195-229: Uncertainty
 nonmonotonic reasoning (also called "defeasible")
 the basic intuition views this as reasoning about "possible
worlds" where some facts are not indisputable and new facts can
change the state of the universe
 you can think of it as a "set" of logic- or rule-based
systems, where every uncertainty is enumerated in one or another
of the possible worlds
 then problem solving reduces to computing solutions in ALL
the possible worlds to find the best one
 this is why nonmonotonic reasoning is criticized for its
"combinatorial explosion".
 special note: abductive reasoning is a new formalism that
relaxes the usual rules of deduction; eg. if A implies B, and B
is true, then abduction says we can assume A is true, even
without direct evidence.
 critics call this "reasoning from a faulty premise" but
there is an abductive reasoning community out there.
Skim: 231-248: Statistics
 statistical reasoning (sometimes called stochastic
reasoning) divides into two general areas
 probabilities associated with rules
 where, for example, low-grade fever and a runny nose
indicate the common cold, but only about 80% of the time.
 these systems depend on judgements of domain experts and
reasonably standard set theory and logic.
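One common scheme for rules with attached probabilities is the MYCIN-style certainty factor, where evidence from two rules for the same conclusion combines as cf1 + cf2 * (1 - cf1). The symptom rules and their numbers below are invented to match the cold example.

```python
def combine_cf(cf1, cf2):
    """Combine two positive certainty factors supporting the same conclusion."""
    return cf1 + cf2 * (1 - cf1)

# Hypothetical rules: low-grade fever -> cold (0.8),
#                     runny nose      -> cold (0.6).
# With both symptoms present, belief in "cold" rises above either alone:
belief = combine_cf(0.8, 0.6)   # 0.8 + 0.6 * 0.2 = 0.92
```

The combined belief always exceeds either input but never reaches 1.0, matching the intuition that corroborating evidence strengthens, but cannot clinch, a diagnosis.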
 fuzzy logic, where concepts or entities can have conditional
membership in a set
 fuzzy logic is intended to support reasoning on propositions
that have "degrees of truth".
 supporters claim this is a better model of reality
 critics observe that decisions based on fuzzy logic always
depend on threshold values, which effectively reduces to truth-valued
logic.
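Both the fuzzy idea and the critics' objection fit in a few lines. The membership function below (its shape and cutoffs are invented for illustration) assigns a degree of truth to "has a fever"; the decision step then collapses that degree back to true/false at a threshold, which is the critics' point.

```python
def fever_membership(temp_c):
    """Degree of membership in 'has a fever': 0.0 below 37 C,
    1.0 above 39 C, linear in between (hypothetical curve)."""
    if temp_c <= 37.0:
        return 0.0
    if temp_c >= 39.0:
        return 1.0
    return (temp_c - 37.0) / 2.0

def has_fever(temp_c, threshold=0.5):
    """The decision step: a threshold turns the degree back into a boolean."""
    return fever_membership(temp_c) >= threshold
```

A reading of 38.0 C is "half a fever" (membership 0.5), yet any action taken on it is all-or-nothing once the threshold is applied.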
