Notes on Chapter 7: Logical Agents and Propositional Satisfiability

Logical Agents

logical agents use logic to make decisions and act in the world

the two most common kinds of logic are:

  • propositional logic, i.e. logic built from propositions and logical connectives such as “and”, “or”, “not”, and “implies”
  • first-order (quantified) logic, i.e. logic that adds quantifiers like “for all” and “there exists”

there are (many!) other kinds of logic, but those are beyond the scope of this course

  • we will only consider 2-valued logic, i.e. logic where expressions can be either true or false (nothing in-between)
  • there are logics with more than 2 values, e.g. 3-valued logic allows expressions to be “true”, “false”, or “unknown”; this could be useful in databases, where missing information could be treated as “unknown”

Syntax and Semantics

when thinking about logic, here are two key ideas you should consider:

  • what is the syntax of the logic? in other words, which sentences are well-formed and which are not? for instance, in Python the expression (p and q) or r is well-formed, but the expression or p ) and q ( is not
    • while not trivial, syntax is usually easier to deal with than semantics since there are standard ways of specifying syntax, e.g. figure 7.7 of the textbook gives a standard BNF description of the syntax of propositional logic
    • syntax is an important feature of natural language, e.g. most native English speakers would agree that the sentence “I am hungry” is syntactically well-formed, but the sentence “am hungry I” is not (it’s ungrammatical)
  • what are the semantics of the logic? in other words, what do logical expressions mean?
    • in logic, the meaning of a sentence is taken to be when it is true and when it is false
    • in the propositional logic we discuss below, the semantics are defined by truth tables; however, in most other logics the semantics are not so easy to specify

logic typically aims for both precise syntax and semantics; in practice, grammars are often used to specify syntax, and semantics are expressed in a variety of other ways
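
to make these two ideas concrete, here is a minimal Python sketch (the nested-tuple representation and the names well_formed and truth_value are choices made for these notes, not the textbook’s figure 7.7 grammar): the first function checks syntax, the second gives truth-table semantics

```python
# a sentence is either a proposition name (a string) or a nested tuple,
# e.g. ("implies", ("and", "p", "q"), "r")

def well_formed(s):
    """Syntax: is s a well-formed sentence in this representation?"""
    if isinstance(s, str):                                   # atomic proposition
        return True
    if isinstance(s, tuple) and len(s) == 2 and s[0] == "not":
        return well_formed(s[1])
    if isinstance(s, tuple) and len(s) == 3 and s[0] in ("and", "or", "implies"):
        return well_formed(s[1]) and well_formed(s[2])
    return False

def truth_value(s, assignment):
    """Semantics: the truth value of s under an assignment (a dict mapping
    proposition names to True/False), following the usual truth tables."""
    if isinstance(s, str):
        return assignment[s]
    if s[0] == "not":
        return not truth_value(s[1], assignment)
    if s[0] == "and":
        return truth_value(s[1], assignment) and truth_value(s[2], assignment)
    if s[0] == "or":
        return truth_value(s[1], assignment) or truth_value(s[2], assignment)
    if s[0] == "implies":
        return (not truth_value(s[1], assignment)) or truth_value(s[2], assignment)
    raise ValueError(f"unknown connective in {s}")

print(well_formed(("or", ("and", "p", "q"), "r")))                   # True
print(well_formed(("or", "p")))                                      # False
print(truth_value(("implies", "p", "q"), {"p": False, "q": False}))  # True
```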

Logical Entailment and Models

suppose \(\alpha\) and \(\beta\) are logical sentences

  • for concreteness, it’s fine to assume \(\alpha\) and \(\beta\) are propositional logic sentences, although the definitions in this section apply to many different logics

the basic question we are usually interested in is logical entailment, or entailment for short

the notation \(\alpha \vDash \beta\) is read \(\alpha\) entails \(\beta\), or \(\beta\) logically follows from \(\alpha\)

  • for example, suppose \(\alpha\) represents all your knowledge, and \(\beta\) represents “I am hungry”; answering the question “Am I hungry?” can be treated as determining if \(\alpha\) entails \(\beta\), i.e. if \(\alpha \vDash \beta\)

another important idea is that of a logical model

most logical sentences have some variables, e.g. \(p \rightarrow q\) has two variables, \(p\) and \(q\)

a model of a logical sentence \(\alpha\) is an assignment of values to all the variables of \(\alpha\) that make it true

  • for example, \(p=\textrm{false}, q=\textrm{false}\) is a model of \(p \rightarrow q\)

we define \(M(\alpha)\) to be the set of all models of \(\alpha\)

  • for example, \(M(p \rightarrow q) = \{ (p=\textrm{false}, q=\textrm{false}), (p=\textrm{false}, q=\textrm{true}), (p=\textrm{true}, q=\textrm{true})\}\)
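
as a rough illustration, \(M(\alpha)\) can be computed by brute-force enumeration; in the sketch below a sentence is represented, for brevity, as a Python function from an assignment (a dict mapping variable names to booleans) to a boolean, which is just a convenience for these notes

```python
from itertools import product

def models(sentence, variables):
    """Return every assignment of True/False to `variables` that makes
    `sentence` true, i.e. the set M(sentence)."""
    found = []
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if sentence(assignment):
            found.append(assignment)
    return found

# M(p -> q): three of the four possible assignments are models
implies_pq = lambda a: (not a["p"]) or a["q"]
print(models(implies_pq, ["p", "q"]))
```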

if \(M(\alpha)=\{\}\), i.e. if \(\alpha\) has no models, then \(\alpha\) is called a contradiction

if \(M(\alpha)\) is as large as possible, i.e. if all assignments of values to the variables of \(\alpha\) make \(\alpha\) true, then \(\alpha\) is called a tautology

if \(M(\alpha)\) is non-empty, then we say that \(\alpha\) has a model, or that \(\alpha\) is satisfiable

if \(M(\alpha)\) is empty, i.e. if \(\alpha\) is a contradiction, then we say \(\alpha\) is unsatisfiable (or sometimes just unsat)
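
with the same function-based representation as above, these three definitions can be checked directly by enumeration (again only feasible for a small number of variables):

```python
from itertools import product

def assignments(variables):
    """Generate all 2^n assignments of True/False to the variables."""
    for values in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, values))

def satisfiable(sentence, variables):
    return any(sentence(a) for a in assignments(variables))   # has at least one model

def tautology(sentence, variables):
    return all(sentence(a) for a in assignments(variables))   # every assignment is a model

def contradiction(sentence, variables):
    return not satisfiable(sentence, variables)                # has no models at all

print(tautology(lambda a: a["p"] or not a["p"], ["p"]))        # True
print(contradiction(lambda a: a["p"] and not a["p"], ["p"]))   # True
```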

here’s a basic connection between entailment and models

\[\alpha \vDash \beta \;\;\textrm{if and only if}\;\; M(\alpha) \subseteq M(\beta)\]

this equivalence shows that entailment can be converted to model-checking, and vice-versa

it is possible to write a program that tests for entailment by checking models, but the problem is that even for propositional logic (the simplest logic) the sets \(M(\alpha)\) and \(M(\beta)\) can be exponentially large in the number of variables

  • so simple model-checking is only practical when there are just a few variables
  • however, more sophisticated model-checking programs don’t necessarily calculate all of \(M(\alpha)\) and \(M(\beta)\), but instead use search to more quickly find models
    • such programs are called SAT solvers, and have been one of the most successful technologies in AI; we’ll discuss them later
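
here is a naive sketch of entailment-by-model-checking, again treating a sentence as a Python function over an assignment; it tests \(M(\alpha) \subseteq M(\beta)\) by enumerating every assignment, so it is exponential in the number of variables and is nothing like a real SAT solver

```python
from itertools import product

def entails(alpha, beta, variables):
    """alpha |= beta  iff  every model of alpha is also a model of beta."""
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        if alpha(a) and not beta(a):    # a model of alpha that is not a model of beta
            return False
    return True

# (p and q) |= p holds, but p |= (p and q) does not
print(entails(lambda a: a["p"] and a["q"], lambda a: a["p"], ["p", "q"]))  # True
print(entails(lambda a: a["p"], lambda a: a["p"] and a["q"], ["p", "q"]))  # False
```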

Inference Procedures

model-checking is not the only way to check for entailment

  • indeed, in some logics there can be infinitely many models, and so it is not even possible to calculate some \(M(\alpha)\) sets explicitly

an inference procedure is a program that finds, or checks, logical entailments

for example, rules of inference in propositional logic can be used to create an inference procedure

consider these two rules of inference:

  • and-elimination: given \(X \land Y\), you can infer \(X\)
  • modus ponens: given \(X\) and \(X \to Y\), you can infer \(Y\)

we can use these rules to, for instance, show that \(p \land q\) and \(p \to r\) entail \(r\):

  1. \(p \land q\) (given)
  2. \(p \to r\) (given)
  3. \(p\) (and-elimination applied to 1)
  4. \(r\) (modus ponens applied to 3 and 2)

this is an example of a proof and is essentially the style used in mathematics (although mathematicians usually make the proof more English-like to improve readability)
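
as a rough sketch of how such a proof could be found mechanically, here is a tiny forward-chaining loop that applies only these two rules to sentences in the nested-tuple form used earlier; the function name and the loop itself are illustrative, and this is far from a complete inference procedure

```python
def infer(given):
    """Repeatedly apply and-elimination and modus ponens to the given
    sentences until no new sentences can be inferred."""
    known = set(given)
    changed = True
    while changed:
        changed = False
        for s in list(known):
            new = set()
            if isinstance(s, tuple) and s[0] == "and":                  # and-elimination
                new.update({s[1], s[2]})
            if isinstance(s, tuple) and s[0] == "implies" and s[1] in known:
                new.add(s[2])                                            # modus ponens
            if not new <= known:
                known |= new
                changed = True
    return known

# the proof above: from p ∧ q and p → r, infer r
print("r" in infer({("and", "p", "q"), ("implies", "p", "r")}))          # True
```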

in general, an inference procedure can be thought of as a program \(P\) that proves entailments, e.g. given \(\alpha\), we would like \(P\) to conclude that \(\beta\) is true if, and only if, \(\alpha \vDash \beta\)

an inference procedure \(P\) is complete if, whenever \(\alpha \vDash \beta\), it is able to infer \(\beta\) from \(\alpha\)

  • an incomplete inference procedure is not able to prove everything that can be proven, e.g. the inference procedure above using just and-elimination and modus ponens cannot prove that \(\neg\neg p\) entails \(p\), because neither inference rule can match a sentence containing a \(\neg\)

we say an inference procedure \(P\) is sound if, whenever \(P\) infers \(\beta\) from \(\alpha\), \(\alpha \vDash \beta\) holds

  • an unsound inference procedure might sometimes claim that \(\beta\) follows from \(\alpha\) when in fact it does not

in general, we would like our logical systems to be sound, complete, and efficient

but it’s very hard to achieve all three!

a fundamental limitation on logical completeness is Gödel’s incompleteness theorem, which essentially states this:

  • in any sound formal system powerful enough to do basic number theory, there are true sentences it cannot prove to be true

Knowledge Bases

when thinking about logical agents, we imagine that the agent has a knowledge base (KB for short) that contains logical sentences that describe the state of the world

you could think of a KB as a database designed for storing and retrieving logical sentences … but it can also do inferencing, i.e. a good KB should be able to combine facts it knows to infer consequences of them

there are two main KB operations: TELL and ASK

  • TELL puts a new fact into the KB
  • ASK queries the KB about what is known; this usually involves inference, where the KB can determine that facts are true (or likely to be true) even though it has not been explicitly told them
    • e.g. if a KB knows that a car is traveling at 50 km/h, then from that it ought to be able to infer that the car is probably not being washed

coming up with efficient ways to represent KBs and perform these two operations is one of the major problems in AI
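
to tie the pieces together, here is a minimal (and very inefficient) KB sketch: TELL appends a sentence, and ASK answers a query by checking whether the conjunction of everything told so far entails it, reusing the nested-tuple representation and brute-force enumeration from the earlier sketches; the class and method names are illustrative only

```python
from itertools import product

def truth_value(s, a):
    """Truth-table semantics for the nested-tuple sentence representation."""
    if isinstance(s, str):
        return a[s]
    if s[0] == "not":
        return not truth_value(s[1], a)
    if s[0] == "and":
        return truth_value(s[1], a) and truth_value(s[2], a)
    if s[0] == "or":
        return truth_value(s[1], a) or truth_value(s[2], a)
    if s[0] == "implies":
        return (not truth_value(s[1], a)) or truth_value(s[2], a)

def symbols(s):
    """All proposition names occurring in sentence s."""
    return {s} if isinstance(s, str) else set().union(*(symbols(x) for x in s[1:]))

class KB:
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        """TELL: store a new sentence in the KB."""
        self.sentences.append(sentence)

    def ask(self, query):
        """ASK: does the conjunction of everything TELLed so far entail the query?"""
        syms = sorted(set().union(symbols(query), *(symbols(s) for s in self.sentences)))
        for values in product([False, True], repeat=len(syms)):
            a = dict(zip(syms, values))
            if all(truth_value(s, a) for s in self.sentences) and not truth_value(query, a):
                return False
        return True

kb = KB()
kb.tell(("and", "p", "q"))
kb.tell(("implies", "p", "r"))
print(kb.ask("r"))    # True: the KB entails r
print(kb.ask("s"))    # False: nothing is known about s
```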