🧠 Introduction to AI
Neu-Ulm University of Applied Sciences
February 12, 2024
The problem-solving agents based on search algorithms know things, but only in a very limited, inflexible sense (Russell and Norvig 2022, 226):
Until now, the focus has been on agents that act rationally. Often, however, rational action requires rational (logical) thought on the agent’s part.
Relevant aspects of the environment must be represented in a knowledge base (KB), composed of sentences in knowledge representation language (e.g., logic).
Propositional logic (sometimes called zero-order logic) is a very simple logic system.
The example is taken from Russell and Norvig (2022).
The wumpus world is an environment in which knowledge-based agents can show their worth.
Let’s apply the PEAS concept to specify the task environment of the wumpus world (though we do not use the complete specification here).
Percepts are represented as a 4-tuple, e.g.,
\([Stench, Breeze, Glitter, None]\)
A typical wumpus world
Propositions are the building blocks of propositional logic.
Atomic sentences consist of a single proposition symbol, which can be true or false, e.g.,
The logical connectives and (\(\land\)), or (\(\lor\)), not (\(\neg\)), implies (\(\implies\)), and if and only if (\(\iff\)) can be used to build complex sentences.
Operator precedence: \(\neg, \land, \lor, \implies, \iff\) (use brackets when necessary)
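The connectives and their precedence can be sketched in Python (a sketch, not from the slides; note that implication has no built-in operator and must be encoded as \(\neg A \lor B\)):

```python
# Propositional connectives over Python booleans.
# "implies" and "iff" are helper names we introduce here; they are not built-ins.
def implies(a, b):
    return (not a) or b  # A => B  is equivalent to  (not A) or B

def iff(a, b):
    return a == b        # A <=> B holds when both sides agree

# Precedence example: ¬P ∧ Q ∨ R parses as ((¬P) ∧ Q) ∨ R.
P, Q, R = True, False, True
value = ((not P) and Q) or R
print(value)  # True
```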
Atomic propositions can be true or false.
The truth of a formula follows from the truth of its atomic propositions (truth assignment or interpretation) and the connectives.
Examples:
If we focus on the immutable aspects of the wumpus world, we need the following symbols for each \([x,y]\) location
The following sentences reflect the \(KB\) for the situation depicted in Figure 5, where the agent has moved to \([2,1]\) and received the percepts below:
\([None,Breeze,None,None]\)
Our goal is to decide whether \(KB \models \alpha\) for some sentence \(\alpha\),
e.g., is \(\neg P_{1,2}\) entailed by our \(KB\) (and \([1,2]\) thus a safe place to move)?
Model checking enumerates all possible models to check that \(\alpha\) is true in every model in which \(KB\) is true.
Example (the agent has just moved to \([2,1]\)):
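Model checking by enumeration can be sketched in Python. The sentences \(R_1\)–\(R_5\) below are assumed to be the standard ones from Russell and Norvig (2022) for this situation:

```python
from itertools import product

# Sketch of model checking by enumeration, assuming the usual wumpus KB
# after the agent moves to [2,1]:
#   R1: ¬P11, R2: B11 ⇔ (P12 ∨ P21), R3: B21 ⇔ (P11 ∨ P22 ∨ P31),
#   R4: ¬B11, R5: B21.
symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def kb(m):
    r1 = not m["P11"]
    r2 = m["B11"] == (m["P12"] or m["P21"])
    r3 = m["B21"] == (m["P11"] or m["P22"] or m["P31"])
    r4 = not m["B11"]
    r5 = m["B21"]
    return r1 and r2 and r3 and r4 and r5

def entails(alpha):
    """KB |= alpha iff alpha is true in every model in which the KB is true."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not alpha(m):
            return False
    return True

print(entails(lambda m: not m["P12"]))  # True: [1,2] is provably safe
print(entails(lambda m: not m["P22"]))  # False: [2,2] remains uncertain
```

Enumerating all \(2^7 = 128\) assignments is feasible here, but the cost grows exponentially with the number of symbols.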
Entailment by the \(KB\) (or logical consequence) can also be done by theorem proving.
Theorem proving means applying rules of inference directly to the sentences in the knowledge base to construct a proof of the desired sentence without consulting models.
When the number of models is large but proofs are short, theorem proving can be more efficient than model checking.
Two sentences \(\alpha\) and \(\beta\) are logically equivalent if they are true in the same set of models, i.e., \(\alpha \equiv \beta\)
A sentence is valid if it is true in all models (e.g., \(P \lor \neg P\))
For any sentences \(\alpha\) and \(\beta\), \(\alpha \models \beta\) if and only if the sentence \((\alpha \implies \beta)\) is valid.
A sentence is satisfiable if it is true in, or satisfied by, some model
Validity and satisfiability are connected:
\(\alpha \models \beta\) if and only if the sentence \((\alpha \land \neg \beta)\) is unsatisfiable
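This reduction of entailment to unsatisfiability can be sketched with a brute-force satisfiability check (a sketch with names of our choosing, not from the slides):

```python
from itertools import product

def satisfiable(sentence, symbols):
    """True if some assignment of the symbols makes the sentence true."""
    return any(sentence(dict(zip(symbols, vals)))
               for vals in product([True, False], repeat=len(symbols)))

# Check P ∧ (P ⇒ Q) |= Q by showing P ∧ (P ⇒ Q) ∧ ¬Q is unsatisfiable:
alpha_and_not_beta = lambda m: m["P"] and ((not m["P"]) or m["Q"]) and not m["Q"]
print(satisfiable(alpha_and_not_beta, ["P", "Q"]))  # False, so the entailment holds

# P ∨ ¬P is valid: its negation is unsatisfiable.
print(satisfiable(lambda m: not (m["P"] or not m["P"]), ["P"]))  # False
```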
The following inference rules can be used in any instance where they apply, generating sound inferences without the need to enumerate models:
All logical equivalences can be used as inference rules
Rules of the game:
The agent is in \([1,1]\) and has received the following percepts:
Question: is there a pit in \([1,2]\) or \([2,1]\)?
Using logical inference and the \(KB\) containing \(R_1\) to \(R_4\), we prove that \(\neg P_{1,2} \land \neg P_{2,1}\)
Inference: \(R_6 : (B_{1,1} \implies (P_{1,2} \lor P_{2,1})) \land ((P_{1,2} \lor P_{2,1}) \implies B_{1,1})\)
\(R_7\) can be inferred from \(R_6\) as follows:
Inference: \(R_7 : (P_{1,2} \lor P_{2,1}) \implies B_{1,1}\)
\(R_8\) can be inferred from \(R_7\) as follows:
Inference: \(R_8 : \neg B_{1,1} \implies \neg (P_{1,2} \lor P_{2,1})\)
\(R_9\) can be inferred from \(R_8\) and \(R_4\) as follows:
Inference: \(R_9 : \neg (P_{1,2} \lor P_{2,1})\)
The conclusion (\(R_{10}\)) can be inferred from \(R_9\) as follows:
Inference: \(R_{10} : \neg P_{1,2} \land \neg P_{2,1}\)
Neither \([1,2]\) nor \([2,1]\) contains a pit.
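The conclusion of the chain \(R_6\)–\(R_{10}\) can be double-checked by brute force. This sketch assumes the two premises the steps actually use, \(R_2 : B_{1,1} \iff (P_{1,2} \lor P_{2,1})\) and \(R_4 : \neg B_{1,1}\):

```python
from itertools import product

# Verify that R2 ∧ R4 entails ¬P12 ∧ ¬P21 over all 2^3 assignments.
symbols = ["B11", "P12", "P21"]

def premises(m):
    r2 = m["B11"] == (m["P12"] or m["P21"])  # R2 (assumed form)
    r4 = not m["B11"]                        # R4 (assumed form)
    return r2 and r4

conclusion = lambda m: (not m["P12"]) and (not m["P21"])

holds = all(conclusion(dict(zip(symbols, v)))
            for v in product([True, False], repeat=len(symbols))
            if premises(dict(zip(symbols, v))))
print(holds)  # True: the derivation R6..R10 is sound
```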
Search algorithms can be used to find a sequence of steps that constitutes a proof.
The proof problem is defined as follows:
Completeness (if the available inference rules are inadequate, the goal is not reachable)
Level of complexity (though propositional logic suffices to represent the wumpus world), e.g.,
Efficiency of inference algorithms
For further information on these and other matters, please see Russell and Norvig (2022).
Explain the logical operators to each other and complete the following truth table.
\(\alpha\) | \(\beta\) | \(\neg \alpha\) | \(\alpha \land \beta\) | \(\alpha \lor \beta\) | \(\alpha \implies \beta\) | \(\alpha \iff \beta\) |
---|---|---|---|---|---|---|
false | false | |||||
false | true | |||||
true | false | |||||
true | true | |||||
Check the following equivalences.
If you have issues of understanding, try to find a real life example.
Commutativity of \(\land\) and \(\lor\) | \(\alpha \land \beta \equiv \beta \land \alpha\) | \(\alpha \lor \beta \equiv \beta \lor \alpha\) |
Associativity of \(\land\) and \(\lor\) | \((\alpha \land \beta) \land \gamma \equiv \alpha \land (\beta \land \gamma)\) | \((\alpha \lor \beta) \lor \gamma \equiv \alpha \lor (\beta \lor \gamma)\) |
Double negation elimination | \(\neg(\neg\alpha) \equiv \alpha\) | |
Contraposition | \(\alpha \implies \beta \equiv \neg \beta \implies \neg \alpha\) | |
Implication elimination | \(\alpha \implies \beta \equiv \neg \alpha \lor \beta\) | |
Biconditional elimination | \(\alpha \iff \beta \equiv (\alpha \implies \beta) \land (\beta \implies \alpha)\) | |
De Morgan | \(\neg(\alpha \land \beta) \equiv \neg \alpha \lor \neg \beta\) | \(\neg(\alpha \lor \beta) \equiv \neg \alpha \land \neg \beta\) |
Distributivity of \(\land\) over \(\lor\) and vice versa | \(\alpha \land (\beta \lor \gamma) \equiv (\alpha \land \beta) \lor (\alpha \land \gamma)\) | \(\alpha \lor (\beta \land \gamma) \equiv (\alpha \lor \beta) \land (\alpha \lor \gamma)\) |
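Each equivalence in the table can be verified exhaustively over all truth assignments; a minimal sketch (helper names are ours, not the slides'):

```python
from itertools import product

def equivalent(f, g, n):
    """True if f and g agree on every assignment of n boolean variables."""
    return all(f(*vs) == g(*vs) for vs in product([True, False], repeat=n))

implies = lambda a, b: (not a) or b

# Contraposition: (a => b) ≡ (¬b => ¬a)
print(equivalent(lambda a, b: implies(a, b),
                 lambda a, b: implies(not b, not a), 2))      # True

# De Morgan: ¬(a ∧ b) ≡ ¬a ∨ ¬b
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))         # True

# Distributivity: a ∧ (b ∨ c) ≡ (a ∧ b) ∨ (a ∧ c)
print(equivalent(lambda a, b, c: a and (b or c),
                 lambda a, b, c: (a and b) or (a and c), 3))  # True
```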
Explain why knowledge representation languages (KRL) are required.
Research real life AI applications that use (propositional) logic as KRL.
Search for other examples of knowledge representation languages.
Suppose the agent has progressed to the point shown in Figure 4 (left part) having perceived nothing in \([1,1]\), a breeze in \([2,1]\), and a stench in \([1,2]\), and is now concerned with the contents of \([1,3]\), \([2,2]\), and \([3,1]\). Each of these can contain a pit, and at most one can contain a wumpus. Construct the set of possible worlds (you should find 32 of them). Mark the worlds in which the KB is true and those in which each of the following sentences is true:
Hence show that \(KB \models \alpha_2\) and \(KB \models \alpha_3\)
You can use a table like this:
Model | \(KB\) | \(\alpha_2\) | \(\alpha_3\) |
---|---|---|---|
\(P_{1,3}\) | true | ||
\(P_{1,3}, P_{3,1}, P_{2,2}\) | |||
… |
Propositions not listed as true on a given line are assumed false
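A scaffold (not a solution) for enumerating the 32 possible worlds: each of \([1,3]\), \([2,2]\), \([3,1]\) may contain a pit, and the wumpus is in at most one of them or absent. Filling in the \(KB\) and \(\alpha\) columns is the exercise itself:

```python
from itertools import product

# Enumerate the 32 possible worlds: 2^3 pit configurations times
# 4 wumpus options (one of the three cells, or no wumpus among them).
cells = ["1,3", "2,2", "3,1"]
worlds = [{"pits": dict(zip(cells, pits)), "wumpus": w}
          for pits in product([True, False], repeat=3)
          for w in cells + [None]]
print(len(worlds))  # 32
```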
Prove, or find a counterexample to, each of the following assertions, where \(\alpha\), \(\beta\), and \(\gamma\) represent any logical sentence:
Also try to find a real life example
Remember, \(\alpha \models \beta\) means that in every model in which \(\alpha\) is true, \(\beta\) is also true.
For each English sentence, there is a corresponding logical sentence. Work out which English sentence corresponds to which propositional logic sentence, and hence determine the meaning (in English) of each proposition symbol.
English
Propositional Logic
We label each sentence \(R_i\) so that we can refer to them
\(KB \models \alpha\) means that \(KB\) does entail \(\alpha\) (a sentence \(\alpha\) follows logically from \(KB\))
In mathematical notation, we write \(M(KB) \subseteq M(\alpha)\)
7 symbols (\(B_{1,1}\), \(B_{2,1}\), \(P_{1,1}\), \(P_{1,2}\), \(P_{2,1}\), \(P_{2,2}\), \(P_{3,1}\)), two truth values each; a model assigns true or false to every proposition symbol, yielding \(2^7 = 128\) possible models
An alternative definition of equivalence is as follows: any two sentences \(\alpha\) and \(\beta\) are equivalent if and only if each of them entails the other: \(\alpha \equiv \beta\) if and only if \(\alpha \models \beta\) and \(\beta \models \alpha\)
Note that \(\equiv\) is used to make claims about sentences, while \(\iff\) is used as part of a sentence
Valid sentences are also known as tautologies; they are necessarily true
The deduction theorem states that every valid implication sentence describes a legitimate inference
The knowledge base given earlier (\(R_1 \land R_2 \land R_3 \land R_4 \land R_5\)) is satisfiable because there are three models in which it is true.
\(\alpha\) is valid iff \(\neg \alpha\) is unsatisfiable; conversely, \(\alpha\) is satisfiable iff \(\neg \alpha\) is not valid.
The one or two statements above the line of an inference rule imply the statement below the line
Modus ponens is Latin for "mode that affirms"
Syntax: specifies the structure of sentences
Semantics: defines the truth of each sentence in each possible world or model
Search algorithms can be used to find solutions
Propositional logic lacks the expressive power to deal concisely with time, space, and universal patterns of relationships among objects.