Inference
- When proving equivalences, we were showing that expressions of the form \(p\leftrightarrow q\) were tautologies and writing \(p\equiv q\).
- But we don't always want to prove \(\leftrightarrow\). Often we only need one direction.
- In general, a mathematical proof is a matter of “show that \(p\) is true”, and we can use anything we already know to be true to do it.
- Basically, we want to know that \(\mbox{[everything we know is true]}\rightarrow p\) is a tautology.
- … then we know \(p\) is true.
- We can use the equivalences we have for this.
- Since each equivalence is a tautology of the form \(p\leftrightarrow q\), we know that \(p\rightarrow q\) is a tautology too.
- But we can also look for tautologies of the form \(p\rightarrow q\).
- Here's a tautology that would be very useful for proving things: \[((p\rightarrow q) \wedge p) \rightarrow q\,.\] (A quick truth-table check of it appears just after this list.)
- For example, if we know that “if you are in this course, then you are a DDP student” and “you are in this course”, then we can conclude “You are a DDP student.”
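- A tautology like this can be verified mechanically. Here is a small Python sketch (an illustration, not part of the original notes; the helper `implies` is just shorthand for \(\rightarrow\)) that prints the truth table for \(((p\rightarrow q)\wedge p)\rightarrow q\). The last column is true in every row, which is exactly what makes it a tautology.

```python
from itertools import product

def implies(a, b):
    """Material implication: a → b is false only when a is true and b is false."""
    return (not a) or b

# Truth table for ((p → q) ∧ p) → q: the last column is True in all four rows.
print("p      q      ((p→q)∧p)→q")
for p, q in product([True, False], repeat=2):
    value = implies(implies(p, q) and p, q)
    print(f"{str(p):6} {str(q):6} {value}")
```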
Rules of Inference
- If we have an implication tautology that we'd like to use to prove a conclusion, we can write the rule like this:
\[
\begin{array}{l}
p\rightarrow q \\
p \\
\hline
\therefore\ q
\end{array}
\]
- This corresponds to the tautology \(((p\rightarrow q) \wedge p) \rightarrow q\).
- The \(\therefore\) symbol is read “therefore”.
- The first two lines are premises.
- The last is the conclusion.
- This inference rule is called modus ponens (or the law of detachment).
- Here are the rules of inference that we can use to build arguments (each one is spot-checked by brute force in the sketch after the table):
| Name | Rule |
|------|------|
| Modus ponens | \(p\); \(p\rightarrow q\); \(\therefore q\) |
| Modus tollens | \(\neg q\); \(p\rightarrow q\); \(\therefore \neg p\) |
| Hypothetical syllogism | \(p\rightarrow q\); \(q\rightarrow r\); \(\therefore p\rightarrow r\) |
| Disjunctive syllogism | \(p\vee q\); \(\neg p\); \(\therefore q\) |
| Addition | \(p\); \(\therefore p\vee q\) |
| Simplification | \(p\wedge q\); \(\therefore p\) |
| Conjunction | \(p\); \(q\); \(\therefore p\wedge q\) |
| Resolution | \(p\vee q\); \(\neg p \vee r\); \(\therefore q\vee r\) |

- Using these rules by themselves, we can do some very boring (but correct) proofs.
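- None of these rules has to be taken on faith: each corresponds to an implication tautology, so a brute-force check over every truth assignment confirms it. Below is a minimal Python sketch (illustrative only; the helpers `implies` and `valid` are invented names) that encodes each rule from the table as (premises, conclusion) and verifies all of them.

```python
from itertools import product

def implies(a, b):
    """Material implication: a → b."""
    return (not a) or b

def valid(premises, conclusion):
    """Brute force over all assignments to (p, q, r): whenever every
    premise is true, the conclusion must be true as well."""
    return all(
        conclusion(p, q, r)
        for p, q, r in product([False, True], repeat=3)
        if all(prem(p, q, r) for prem in premises)
    )

# Each rule from the table, written as ([premises], conclusion).
rules = {
    "modus ponens":           ([lambda p, q, r: p,
                                lambda p, q, r: implies(p, q)], lambda p, q, r: q),
    "modus tollens":          ([lambda p, q, r: not q,
                                lambda p, q, r: implies(p, q)], lambda p, q, r: not p),
    "hypothetical syllogism": ([lambda p, q, r: implies(p, q),
                                lambda p, q, r: implies(q, r)], lambda p, q, r: implies(p, r)),
    "disjunctive syllogism":  ([lambda p, q, r: p or q,
                                lambda p, q, r: not p],         lambda p, q, r: q),
    "addition":               ([lambda p, q, r: p],             lambda p, q, r: p or q),
    "simplification":         ([lambda p, q, r: p and q],       lambda p, q, r: p),
    "conjunction":            ([lambda p, q, r: p,
                                lambda p, q, r: q],             lambda p, q, r: p and q),
    "resolution":             ([lambda p, q, r: p or q,
                                lambda p, q, r: (not p) or r],  lambda p, q, r: q or r),
}

for name, (premises, conclusion) in rules.items():
    print(name, valid(premises, conclusion))   # every rule should print True
```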
- e.g. “If I am sick, there will be no lecture today;” “either there will be a lecture today, or all the students will be happy;” “the students are not happy.”
- Translate into logic as: \(s\rightarrow \neg l\), \(l\vee h\), \(\neg h\).
- Then we can reach a conclusion as follows:
1. \(l\vee h\) [hypothesis]
2. \(\neg h\) [hypothesis]
3. \(l\) [disjunctive syllogism using (1) and (2)]
4. \(s\rightarrow \neg l\) [hypothesis]
5. \(\neg s\) [modus tollens using (3) and (4)]
- So, I am not sick.
- Notice a similar proof style to equivalences: one piece of logic per line, with the reason stated clearly. (The whole argument can also be checked by brute force, as in the sketch below.)
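- The same brute-force idea checks the whole argument at once: over every truth assignment to \(s\), \(l\), \(h\), whenever all three hypotheses are true, so is \(\neg s\). A minimal sketch (illustrative, not part of the notes):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Hypotheses: s→¬l, l∨h, ¬h.  Conclusion: ¬s.
entailed = all(
    not s
    for s, l, h in product([False, True], repeat=3)
    if implies(s, not l) and (l or h) and (not h)
)
print(entailed)  # True: every assignment satisfying the hypotheses also satisfies ¬s
```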
Inference and Quantified Statements
- Rules of inference start to be more useful when applied to quantified statements.
- Rules for quantified statements:
| Name | Rule |
|------|------|
| Universal instantiation | \(\forall x P(x)\); \(\therefore P(c)\) |
| Universal generalization | \(P(c)\) [for an arbitrary \(c\)]; \(\therefore \forall x P(x)\) |
| Existential instantiation | \(\exists x P(x)\); \(\therefore P(c)\) [for some particular \(c\)] |
| Existential generalization | \(P(c)\) [for some particular \(c\)]; \(\therefore \exists x P(x)\) |

- Now we can prove things that are maybe less obvious.
- e.g. “Students who pass the course either do the homework or attend lecture;” “Bob did not attend every lecture;” “Bob passed the course.”
- Translate into logic as (with domain being students in the course): \(\forall x (P(x) \rightarrow H(x)\vee L(x))\), \(\neg L(b)\), \(P(b)\).
- Then we can conclude:
1. \(\forall x (P(x) \rightarrow H(x)\vee L(x))\) [hypothesis]
2. \(P(b) \rightarrow H(b)\vee L(b)\) [universal instantiation using (1)]
3. \(P(b)\) [hypothesis]
4. \(H(b)\vee L(b)\) [modus ponens using (2) and (3)]
5. \(\neg L(b)\) [hypothesis]
6. \(H(b)\) [disjunctive syllogism using (4) and (5)]
- So, Bob must have done the homework. (The sketch below spot-checks this argument on a small finite domain.)
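- Over a finite domain, \(\forall\) behaves like Python's all() and \(\exists\) like any(), so an argument like this can at least be spot-checked by enumerating every possible behaviour of the predicates on a tiny invented domain. The sketch below does that for a made-up two-student domain; it checks only that finite domain, so it is an illustration rather than a general proof.

```python
from itertools import product

def implies(a, b):
    return (not a) or b

students = ["bob", "alice"]   # tiny invented domain, just for illustration

ok = True
# Enumerate every way P, H, L could behave on the domain (2**6 = 64 models).
for values in product([False, True], repeat=3 * len(students)):
    P = dict(zip(students, values[0:2]))   # P(x): x passed the course
    H = dict(zip(students, values[2:4]))   # H(x): x did the homework
    L = dict(zip(students, values[4:6]))   # L(x): x attended lecture

    hypotheses = (
        all(implies(P[x], H[x] or L[x]) for x in students)   # ∀x (P(x) → H(x) ∨ L(x))
        and not L["bob"]                                     # ¬L(b)
        and P["bob"]                                         # P(b)
    )
    if hypotheses and not H["bob"]:    # the conclusion H(b) must follow
        ok = False

print(ok)  # True: in every model satisfying the hypotheses, Bob did the homework
```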
- e.g. “Bob failed the course, but attended every lecture;” “everyone who did the homework every week passed the course;” “if a student passed the course, then they did some of the homework.” We want to conclude that not every student submitted every homework assignment.
- Translate into logic as (domain for \(s\) being students in the course and \(w\) being weeks of the semester): \[\begin{gathered} \neg P(b)\wedge \forall w(L(b, w)) \,,\\ \forall s[(\forall w H(s,w)) \rightarrow P(s)] \,,\\ \forall s[P(s)\rightarrow\exists w H(s,w)] \,. \end{gathered}\]
- Then we can conclude:
1. \(\neg P(b)\wedge \forall w(L(b, w))\) [hypothesis]
2. \(\neg P(b)\) [simplification using (1)]
3. \(\forall s[(\forall w H(s,w)) \rightarrow P(s)]\) [hypothesis]
4. \((\forall w H(b,w)) \rightarrow P(b)\) [universal instantiation using (3)]
5. \(\neg\forall w H(b,w)\) [modus tollens using (4) and (2)]
6. \(\exists w \neg H(b,w)\) [quantifier negation using (5)]
7. \(\exists s \exists w \neg H(s,w)\) [existential generalization using (6)]
- So, somebody didn't hand in one of the homeworks.
- We didn't use one of the hypotheses. That's okay.
- In the last line, could we have concluded that \(\forall s \exists w \neg H(s,w)\) using universal generalization?
- i.e. every student missed at least one homework.
- Hopefully not: intuitively, there's no evidence for it in the hypotheses.
- The problem is that \(b\) isn't just anybody in line 1 (or therefore 2, 5, 6, or 7). It's Bob.
- It's not an “arbitrary” value, so we can't apply universal generalization. (The sketch below makes this concrete by finding a situation where the hypotheses hold but not every student missed a homework.)
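- To make that concrete, here is a sketch on an invented two-student, two-week domain (illustrative only). Enumerating every model on that domain confirms that the hypotheses force \(\exists s \exists w \neg H(s,w)\), but it also finds a model where the hypotheses hold and one student missed no homework at all, so \(\forall s \exists w \neg H(s,w)\) does not follow.

```python
from itertools import product

def implies(a, b):
    return (not a) or b

students, weeks = ["bob", "alice"], [1, 2]        # tiny invented domain
pairs = [(s, w) for s in students for w in weeks]

exists_follows, countermodel = True, None
# Enumerate every model: P over students, H and L over (student, week) pairs.
for p_vals in product([False, True], repeat=len(students)):
    P = dict(zip(students, p_vals))
    for h_vals in product([False, True], repeat=len(pairs)):
        H = dict(zip(pairs, h_vals))
        for l_vals in product([False, True], repeat=len(pairs)):
            L = dict(zip(pairs, l_vals))

            hypotheses = (
                not P["bob"] and all(L[("bob", w)] for w in weeks)       # ¬P(b) ∧ ∀w L(b,w)
                and all(implies(all(H[(s, w)] for w in weeks), P[s])
                        for s in students)                               # ∀s[(∀w H(s,w)) → P(s)]
                and all(implies(P[s], any(H[(s, w)] for w in weeks))
                        for s in students)                               # ∀s[P(s) → ∃w H(s,w)]
            )
            if not hypotheses:
                continue
            if not any(not H[(s, w)] for s in students for w in weeks):  # ∃s∃w ¬H(s,w)
                exists_follows = False
            if not all(any(not H[(s, w)] for w in weeks) for s in students):  # ∀s∃w ¬H(s,w) fails
                countermodel = countermodel or (P, H, L)

print(exists_follows)             # True: ∃s∃w ¬H(s,w) holds in every such model
print(countermodel is not None)   # True: the hypotheses do NOT force ∀s∃w ¬H(s,w)
```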