PART II. Introduction To Artificial Intelligence

1. BACKGROUND

1.1 Artificial Intelligence (AI)

AI text books seldom give a definition of what AI is. Perhaps this follows the norm in many sciences, like mathematics, biology or chemistry, where no definition of the subject is ever attempted. One simply grasps the meaning of the subject as one reads more and more into it. In their book Artificial Intelligence (McGraw-Hill, 1991), Rich and Knight define AI as the study of how to make computers do things which (presently) humans do better. Charniak and McDermott, in their Introduction to Artificial Intelligence (Addison-Wesley, 1985), describe the work of AI researchers as trying to make computers think. They define AI as the study of mental faculties through the use of computational models. It is sometimes believed that one can grasp the definition or the meaning of a subject if one studies the problem areas that the subject is intended to solve. In the following section we present, albeit not exhaustively, some typical AI problems.

1.1.1 Typical AI Problems

1.1.1.1 Game Playing: Many games, like chess, tic-tac-toe and moraba-raba, require a great deal of experience before one can master them. There are potentially infinite tactics that one may employ in any game situation. Thus, mastery of any game requires intelligence. Among the earliest game playing programs was Samuel's checkers program; early chess programs followed soon after. Nowadays the list is very long.

1.1.1.2 Automatic Theorem Proving (ATP): In general, supplying a proof or a disproof requires an intelligent choice of steps leading eventually to the result. Such a process is usually termed a deduction. Mathematical fields like algebra, logic and group theory were the ones which received much attention from the early AI researchers. The Logic Theorist, by Newell and Simon, was one of the earliest AI programs; it proved theorems in propositional logic from Whitehead and Russell's Principia Mathematica.

1.1.1.3 Expert Systems: An Expert System is an AI program that has knowledge of some specialised field of expertise. Such fields can include the following: weather forecasting, natural resources prospecting, medical diagnosis, financial management and forecasting, etc. One of the early expert systems is MYCIN, which performs medical diagnosis of patients for bacterial infections. MYCIN has been so successful that it is now used in training doctors.

1.1.1.4 Natural Language Processing: Natural Language Processing could be described as the study of the syntax, semantics and pragmatics of natural languages such as Sesotho, English, etc., in their written form. Naturally, the ultimate goal here is to understand what has been written. What makes natural language processing hard, although fascinating, is that messages or sentences in a natural language tend to be contextual, placing a great deal of emphasis on assumed facts and knowledge of discourse. Further, there is the problem of ambiguity in written speech. Although the syntactic rules of a natural language can easily be formalised, semantic and pragmatic rules can only be acquired through language use over long periods, with influences from culture and social contexts.

1.1.1.5 Vision and Speech Recognition: In vision, cameras can be used as input sensors. At present, vision programs are beset with an extensive computational task since, to find a conceptual vision model, a bottom-up approach is normally followed. First, light intensity signals, called pixels, are grouped into lines. Then each three-dimensional object on the scene is assembled until the overall conceptual model of the entire scene is built. In speech, microphones are used as input sensors and speakers as the output medium. Again, speech recognition is computationally burdened since parts of speech have to be grouped together into a meaningful statement, one at a time. Even in clean speech there is always a problem of ambiguity. The reader may want to verify this by trying to read the following two statements aloud:
1. It is hard to recognise a ship in a deep blue sea.
2. It is hard to wreck a nice ship in a deep blue sea.
In both speech recognition and vision, unwanted inputs, usually referred to as noise, have to be taken care of, effectively increasing the complexity of the problem to be tackled.

1.1.1.6 Common Sense Reasoning: Early AI researchers might have thought that common sense reasoning, like many everyday mundane tasks, would be an easy AI problem to solve, as it appeared to require no specialised knowledge, unlike expert systems. It turned out, however, that this area is among the most difficult, as it requires an insurmountable amount of knowledge of facts and their relations to be represented. In many situations, common sense reasoning may require one to interrelate many fields, like vision, speech and natural language processing knowledge techniques.

1.2 The AI Technique

1.2.1 Background

An AI technique, being algorithmic like any software engineering problem solving technique, requires two things: knowledge representation (cf. data structures) and search techniques and control strategies (cf. program algorithms).

1.2.1.1 Knowledge Representation: Knowledge differs from data in that knowledge is usually structured information: a large collection of data organised in such a way that it adequately describes the objects in the domain of discourse and their interrelations, so that new knowledge, equally descriptive, can be inferred. AI research has demonstrated that intelligence requires knowledge. But knowledge is voluminous, hard to encode accurately and constantly changing. Thus, any AI knowledge representation technique will try to capture generalisations in the domain to avoid data explosion (since knowledge is voluminous), and be such that reasoning and inferences can be carried out even with incomplete knowledge (since knowledge is hard to encode accurately). The technique should also allow for the addition of new knowledge in the domain (since knowledge is constantly changing).

1.2.1.2 Search Techniques and Control Strategies: In many AI text books no distinction is made between a control strategy and a search technique. In our discourse, however, we attempt to make such a distinction. We envisage a control strategy as an overall algorithmic technique that controls the manner in which a search technique is carried out. Thus, we include in this category strategies like the following:

a. Backtracking and Unification: We discussed backtracking and unification in PART I. We showed that these can be used in any search technique. In particular, we showed that the Resolution Principle (an inference mechanism) employs unification. We also discussed backtracking when applied to the depth first search strategy.

b. Forward Reasoning: In this style of reasoning, given an initial state (in the problem space) and a (desired) goal state, we begin at the initial state and apply all the legal rules or actions, generating more states, until one of the states being generated is the goal state. Both forward reasoning and backward chaining are extensively used in production systems.

c. Backward Chaining: In backward chaining, given a (desired) goal state, we explore the solution space backwards, following all the legal solution paths, until we find the initial state(s) that might have produced our desired goal (which will be our current state).

We describe a search strategy or technique as a method of searching a problem space in order to find a solution, employing at least one of the control strategies mentioned above. A search strategy can be very general or problem specific. General search strategies include the depth first and breadth first search strategies.

1.3 Exercises

1. Find a specific problem which could be characterised as an AI problem.
2. Is writing a program that aids students in learning basic geometric concepts an AI application? If it is, explain why; if not, explain why not.
3. Compare the following problems:
a) A program to convert documents from one word processor to another.
b) A program to convert documents from one natural language to another.
c) A program to convert documents from one programming language to another.
Order these problems by increasing complexity and select the ones which are AI candidates.
4. Discuss whether or not a program that constructs a lecture time table could be classified as an AI problem.


2. KNOWLEDGE REPRESENTATION TECHNIQUES

2.1 Predicate Calculus

2.1.1 Background

Since predicate calculus is formally understood, if information can be encoded in the calculus then deductions can be made, simply, as proofs in the domain of discourse. Robinson's Resolution Principle has demonstrated the practicality of the predicate calculus, culminating in the engineering of logic programming languages such as ProLog. The adequacy and efficiency of a predicate calculus representation of information depends entirely on the individual doing the encoding: the granularity of the predicates and what is to be proven. The biggest problem with predicate calculus representation, however, is its limitation in the kind of information that can be represented in the calculus.

Example II, 2.1: Given the following narration, we want to find whether Marcus was not loyal to Caesar.
1. Marcus was a man.
2. Marcus was a Pompian.
3. All Pompians were Romans.
4. Caesar was a ruler.
5. All Romans were either loyal to Caesar or hated him.
6. Everyone is loyal to someone.
7. People only try to assassinate rulers they are not loyal to.
8. Marcus tried to assassinate Caesar.

Solution: There are two things that we need to do before we can apply resolution: 1) try to represent the given statements (as well as our purported theorem) in predicate calculus as adequately as possible; 2) add the negation of our hypothesis to the CNF of all the premises. So, in predicate calculus we may have the following representation:
1. man(marcus).
2. pompian(marcus).
3. ∀X (pompian(X) → roman(X)).
4. ruler(caesar).
5. ∀X (roman(X) → loyal_to(X, caesar) ∨ hate(X, caesar)).
6. ∀X (man(X) → ∃Y loyal_to(X, Y)).
7. ∀X ∀Y (man(X) ∧ ruler(Y) ∧ try_to_assassinate(X, Y) → ¬loyal_to(X, Y)).
8. try_to_assassinate(marcus, caesar).
The hypothesis is:
9. ¬loyal_to(marcus, caesar).
In CNF we have:
1. man(marcus).
2. pompian(marcus).
3. ¬pompian(X) ∨ roman(X).
4. ruler(caesar).
5. ¬roman(Y) ∨ loyal_to(Y, caesar) ∨ hate(Y, caesar).
6. ¬man(Z) ∨ loyal_to(Z, g(Z)), where g is a Skolem function.
7. ¬man(W) ∨ ¬ruler(U) ∨ ¬try_to_assassinate(W, U) ∨ ¬loyal_to(W, U).
8. try_to_assassinate(marcus, caesar).
9. loyal_to(marcus, caesar). (negated hypothesis)
We carry out resolution as follows:
- Resolving 9 with 7, unification {marcus/W, caesar/U}, gives ¬man(marcus) ∨ ¬ruler(caesar) ∨ ¬try_to_assassinate(marcus, caesar).
- Resolving this with 1 gives ¬ruler(caesar) ∨ ¬try_to_assassinate(marcus, caesar).
- Resolving this with 4 gives ¬try_to_assassinate(marcus, caesar).
- Resolving this with 8 gives the NIL (empty) clause.
Thus, the negated hypothesis is refuted, and we conclude that Marcus was not loyal to Caesar.


Example II, 2.2: Given the facts:
1. Students don't like hard courses.
2. Some courses are hard and others are not.
3. All formal courses are hard.
4. CS 341 (Theory of Computation) is a formal course.

Show by resolution refutation that no student will like CS 341.

Solution: We will first represent the facts in predicate calculus and then convert them to clause form in order to carry out the refutation process. So,
1. ∀X ∀Y (student(X) ∧ course(Y) ∧ hard(Y) → ¬like(X, Y)).
2. ∃U ∃V (course(U) ∧ hard(U) ∧ course(V) ∧ ¬hard(V)).
3. ∀W (course(W) ∧ formal(W) → hard(W)).
4. course(cs341) ∧ formal(cs341).

In clause form these become:
1. ¬student(X) ∨ ¬course(Y) ∨ ¬hard(Y) ∨ ¬like(X, Y).
2.
2.1 course(a)
2.2 hard(a)
2.3 course(b)
2.4 ¬hard(b)
where a, b are Skolem constants.
3. ¬course(W) ∨ ¬formal(W) ∨ hard(W).
4.
4.1 course(cs341)
4.2 formal(cs341)
Now, the hypothesis is ∀S (student(S) → ¬like(S, cs341)). Its negation is found as:
¬[∀S (student(S) → ¬like(S, cs341))]
≡ ∃S ¬[¬student(S) ∨ ¬like(S, cs341)]
≡ ∃S (student(S) ∧ like(S, cs341))
In clause form we have:
5.
5.1 student(m)
5.2 like(m, cs341)
where m is a Skolem constant.

Refuting, we get:
- Resolving 5.2 with 1, unification {m/X, cs341/Y}, gives ¬student(m) ∨ ¬course(cs341) ∨ ¬hard(cs341).
- Resolving this with 5.1 gives ¬course(cs341) ∨ ¬hard(cs341).
- Resolving this with 4.1 gives ¬hard(cs341).
- Resolving this with 3, unification {cs341/W}, gives ¬course(cs341) ∨ ¬formal(cs341).
- Resolving this with 4.1 gives ¬formal(cs341).
- Resolving this with 4.2 gives the NIL clause.
Thus, no student will like CS 341.

Example II, 2.3: Consider the following facts:
1. The members of the Elm St. Bridge Club are Joe, Sally, Bill, and Ellen.
2. Joe is married to Sally.
3. Bill is Ellen's brother.
4. The spouse of every married person in the club is also in the club.
5. The last meeting of the club was at Joe's house.
a) Represent these facts in predicate logic.
b) Try proving, by resolution refutation, that the last meeting of the club was at Sally's house. If you fail, add any necessary implicit information to the knowledge to help you.


Solution:
a)
1 a). member(joe, elm)
1 b). member(sally, elm)
1 c). member(bill, elm)
1 d). member(ellen, elm)
2. member(joe, elm) aside, married(joe, sally)
3. brother(bill, ellen)
4. ∀X ∀Y (spouse(X, Y) ∧ member(Y, elm) → member(X, elm)).
5. own(joe, house) ∧ last-meeting(elm, house).
b) In order to be able to make some meaningful mechanical inference from the given statements, we have to add some more rules (although obvious to humans) explicitly, like:
6. A spouse is a married person. ∀X ∀Y (married(X, Y) → spouse(Y, X)).
7. Every married couple shares the house they live in:
7 a). ∀X ∀Y (own(X, house) ∧ married(X, Y) → own(Y, house)).
7 b). ∀X ∀Y (own(X, house) ∧ spouse(Y, X) → own(Y, house)).
Now we can put everything into clausal form in order to apply the resolution refutation process. So, we have:
1 a). member(joe, elm)
1 b). member(sally, elm)
1 c). member(bill, elm)
1 d). member(ellen, elm)
2. married(joe, sally)
3. brother(bill, ellen)
4. ¬spouse(X, Y) ∨ ¬member(Y, elm) ∨ member(X, elm)
5 a). own(joe, house)
5 b). last-meeting(elm, house)
6. ¬married(U, V) ∨ spouse(V, U)
7 a). ¬own(S, house) ∨ ¬married(S, W) ∨ own(W, house)
7 b). ¬own(T, house) ∨ ¬spouse(Z, T) ∨ own(Z, house)
Now we want to prove (by resolution refutation) that the last meeting of the club was at Sally's house. That is, own(sally, house) ∧ last-meeting(elm, house). So, we refute the negation of the hypothesis. That is, we refute:
8. ¬own(sally, house) ∨ ¬last-meeting(elm, house).
So:
- Resolving 8 with 5 b) gives ¬own(sally, house).
- Resolving this with 7 a), unification {sally/W}, gives ¬own(S, house) ∨ ¬married(S, sally).
- Resolving this with 5 a), unification {joe/S}, gives ¬married(joe, sally).
- Resolving this with 2 gives the NIL clause.
Thus, the last meeting of the club was indeed at Sally's house.

Example II, 2.4: Consider the following knowledge base (KB):
1. African men are either married or single.
2. Anyone married will have children.
3. If one is not married then he is single.
4. Lesooa is a single African.

a) i) Represent these statements in predicate logic, as adequately as possible.
ii) Consequent to i) above or otherwise, convert the entire KB to clause form.
b) Using a) above, try to prove, by resolution refutation, that Lesooa has no child. If you fail, add just two more rules (which are implicit from the KB) and carry out resolution.

Solution:
a) i)
1. ∀X (african(X) → married(X) ∨ single(X))
2. a) ∀Y (married(Y) → has_a_child(Y))
3. a) ∀W (¬married(W) → single(W))
4. african(lesooa) ∧ single(lesooa)
ii)
1. ¬african(X) ∨ married(X) ∨ single(X)
2. a) ¬married(Y) ∨ has_a_child(Y)

3. a) married(W) ∨ single(W)
4. a) african(lesooa)
   b) single(lesooa)
b) The hypothesis is ¬has_a_child(lesooa). Its negation is:
5. has_a_child(lesooa)
We cannot proceed with any resolution, since there are not enough explicit rules to spell out all the implicit information following from rules 2 and 3. The information is as follows:
2. b) Anyone who has a child must be married (i.e. by the symmetry of rule 2). So, we have ∀B (has_a_child(B) → married(B)). In clause form: ¬has_a_child(B) ∨ married(B).
3. b) One who is single is not married (i.e. by the symmetry of rule 3). So, ∀A (single(A) → ¬married(A)). In clause form: ¬single(A) ∨ ¬married(A).
So, resolving, we get:
- Resolving 5 with 2 b), unification {lesooa/B}, gives married(lesooa).
- Resolving this with 3 b), unification {lesooa/A}, gives ¬single(lesooa).
- Resolving this with 4 b) gives the NIL clause.
Thus, Lesooa has no child.


2.1.2 Question Answering Using Resolution

We can use unification during resolution to extract answers, as bindings of the variables in our questions. This means, for example, that in Example II, 2.1 we could have asked: who was not loyal to Caesar? (i.e. ¬loyal_to(X, caesar)), and we would want the answer to be X = marcus. The technique is to add the positive hypothesis as a disjunct to the negation of the hypothesis to be refuted, effectively making a tautology. However, the positive hypothesis disjunct is never used in any resolution. It is only allowed to be instantiated during the resolution of the other clauses and literals. In the end, instead of the NIL clause, the positive hypothesis will be the one left, properly instantiated. We demonstrate this in the following example.

Example II, 2.5: Using the KB of Example II, 2.1, try to answer the question: who was not loyal to Caesar? (i.e. ¬loyal_to(X, caesar)).

Solution: The negated hypothesis is loyal_to(X, caesar) and the positive hypothesis is ¬loyal_to(X, caesar). So, we form the disjunction:
loyal_to(X, caesar) ∨ ¬loyal_to(X, caesar)
The literal ¬loyal_to(X, caesar) will never be used in any resolution; it only receives the ensuing instantiations. So:
- Resolving the literal loyal_to(X, caesar) with 7, unification {X/W, caesar/U}, gives ¬man(X) ∨ ¬ruler(caesar) ∨ ¬try_to_assassinate(X, caesar) ∨ ¬loyal_to(X, caesar).
- Resolving with 8, unification {marcus/X}, gives ¬man(marcus) ∨ ¬ruler(caesar) ∨ ¬loyal_to(marcus, caesar).
- Resolving with 1 gives ¬ruler(caesar) ∨ ¬loyal_to(marcus, caesar).
- Resolving with 4 leaves only the instantiated positive hypothesis: ¬loyal_to(marcus, caesar).

Thus, the answer is X = marcus.
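This answer extraction is exactly what a ProLog query does: the variable in the goal comes back bound. Using the illustrative not_loyal_to/2 encoding sketched after Example II, 2.1, one might ask:

    ?- not_loyal_to(Who, caesar).
    Who = marcus.

The binding Who = marcus plays the role of the instantiated positive hypothesis above.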

2.1.3 Some Inadequacies of Predicate Calculus Representation

There are statements or situations which cannot be represented using predicate calculus. These include situations in the following calculi:

2.1.3.1 Modal Reasoning: It is impossible to represent in predicate calculus a statement that suggests a mode, a possibility or a necessity. Typical statements could include the following:
1. Mofana might find another job.
2. It is possible that there are many other universes.
3. It is often difficult to follow any politician's logic.
4. Sacrifice is a virtue rarely found among men.
5. Unless we change direction soon, we are doomed to end where we are headed.


2.1.3.2 Temporal Reasoning: Temporal reasoning is reasoning about time and time interdependencies. Such scenarios are difficult to represent in predicate logic, as the following examples show:
1. Tomorrow will never come.
2. SADC may be in Lesotho forever.
3. Sometime in the past, Lesotho was a tranquil land.
4. Every time, insanity prevails over logic.
5. It will soon become apparent that we have no political leadership.

In general, any situation or statement which cannot be reduced to being either true or false might be impossible to represent in predicate calculus. Some or most of such statements fall under the logics we have briefly discussed, or others. We present another five examples to cement the idea:
1. I feel like crying for Lesotho, my beloved country.
2. It is not surprising that some people are suspicious of SADC's intentions in Lesotho.
3. If wishes were horses, then beggars would ride.
4. It is only fair that people are told what is happening.
5. Too often, a politician's only ambition is to rule, and how is none of his business.

2.2 Production (or Rule) Systems

2.2.1 Background

A production system has three basic components: a global data structure, sometimes referred to as the current working memory; a set of rules (productions); and a control algorithm.

2.2.1.1 Production Rules: The productions are of the form:

IF <premises> THEN <conclusions/actions>

The <premises> is a list of premises that must be true in the current working memory before the <actions> can be taken or the <conclusions> inferred. The rules are the only means by which a production system can infer, and hence add or delete, knowledge in the current KB.


2.2.1.2 The Current Working Memory (CWM): The CWM is a global data structure containing the data against which the <premises> part of the rules can be matched. The <conclusions> or <actions> part of the rules will either add or delete some memory elements in the CWM.

2.2.1.3 The Control Algorithm: The algorithm used by a production system to infer knowledge is a recognise-act cycle, as follows. Given a goal, G, to prove:
1. Check if G has been reached in the CWM (i.e. is G ∈ CWM?).
2. If G is one of the memory elements then STOP (if G ∈ CWM then G has been proven).
3. If G is not in the CWM then (it could be deducible from rule application) do:
3.1 If the forward reasoning technique is being used, do:
3.1.1 Select rules whose <premises> parts match the contents of the CWM.
3.1.2 Use the implemented conflict resolution technique to select which rules to fire, and fire them.
3.2 If the backward chaining technique is being used, do:
3.2.1 Select rules whose <conclusions> or <actions> parts infer the current (sub)goal, G.
3.2.2 Use the implemented conflict resolution technique to select which rules to pick, and prove their <premises> parts as new (sub)goals.
3.3 Go to 1.
4. If G is a conjunction of goals, G1 and G2, then G would have been proven if
4.1 G1 has been proven, AND
4.2 G2 has also been proven.
5. If G is a disjunction of goals, G1 or G2, then G would have been proven if, EITHER
5.1 G1 has been proven, OR
5.2 G2 has been proven.
A conflict resolution technique is necessary since rules may contradict each other, in the sense that some rules could undo what other rules have done. Again, in a forward reasoning strategy, some rules could be adding no new elements or deleting no elements in the CWM. In a backward chaining strategy, some rules could have their <conclusions> parts already proven before. A minimal forward-reasoning sketch of this cycle is given below.
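The following sketch renders the forward branch of the recognise-act cycle in ProLog; wm/1 (the CWM), rule/2, fire/0 and prove/1 are our own illustrative names, and the conflict resolution heuristic is simply "fire only rules that add a new element":

    :- use_module(library(lists)).
    :- dynamic wm/1.

    % The initial CWM of Example II, 2.6 below.
    wm(a). wm(b). wm(c). wm(e). wm(g). wm(h).

    % rule(Premises, Conclusion): IF all Premises hold THEN add Conclusion.
    rule([f, b], z).
    rule([c, d], f).
    rule([a], d).

    % One recognise-act step: pick a rule whose premises are all in the
    % CWM and whose conclusion is new, and assert its conclusion.
    fire :-
        rule(Premises, Conclusion),
        \+ wm(Conclusion),
        forall(member(P, Premises), wm(P)),
        assertz(wm(Conclusion)).

    prove(G) :- wm(G), !.        % steps 1-2: G is already in the CWM
    prove(G) :- fire, prove(G).  % step 3: fire a rule, then re-check

The query ?- prove(z). reproduces the forward chaining trace worked by hand in Example II, 2.6 below.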


2.2.2 Using Rules In Knowledge Representation

Rules are extensively used in Expert Systems as a way to encode knowledge. These rules are later used by the inference algorithm to infer new knowledge. A typical rule set for a mechanic expert system, for a car that won't start, may include the following rules:
1. IF the battery is flat THEN the car won't start, with certainty 80%.
2. IF the battery is OK AND the starter motor is working, but the carburettor is clogged, THEN the car won't start, with certainty 60%.
Rules are also used in Reactive Systems, where they trigger the reaction of the system to the surrounding environment. A typical reactive system could be one that keeps temperature within certain limits. Such a system may have a rule set which includes the following rules:
1. IF the temperature falls below the required level THEN switch on the heater.
2. IF the temperature rises above the required level THEN switch off the heater.
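As a hedged sketch, the two diagnostic rules might be tabulated in ProLog as follows; the predicate names and the plain numeric certainties are our own simplification (a real system such as MYCIN combines certainty factors far more carefully):

    :- dynamic battery_flat/0, battery_ok/0,
               starter_working/0, carburettor_clogged/0.

    battery_flat.    % the observed symptom for this particular run

    % diagnosis(Conclusion, Certainty) :- Premises.
    diagnosis(wont_start, 0.80) :-
        battery_flat.
    diagnosis(wont_start, 0.60) :-
        battery_ok, starter_working, carburettor_clogged.

The query ?- diagnosis(wont_start, CF). then answers CF = 0.8.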

Example II, 2.6: Suppose the following are facts in the CWM of a production system:
CWM = {A, B, C, E, G, H}
and that the following are the rules in the KB:
1. IF F and B THEN Z
2. IF C and D THEN F
3. IF A THEN D
and that the goal is to prove Z.
a) Show how a production system using a forward chaining technique might carry out the proof process.
b) Again, show how a backward chainer might carry out its proof process.

Solution:
a) A forward chainer might proceed as follows:
0. The goal is not a member of the CWM. So, it must be deducible from rule application.
1. Fire all the rules whose premises are satisfied by the CWM. Thus, rule 3 will fire to produce D. Now, the CWM will become: A, B, C, D, E, G, H.
2. Once again, the goal is still not yet a member of the CWM. So, we should still fire rules.
3. Now, rules 2 and 3 can fire, since both of them have their premises true in the CWM. However, rule 2 will be the only one to fire, using the heuristic of firing only rules that introduce new elements to the CWM. This transforms the CWM to: A, B, C, D, E, F, G, H.
4. Once more, the goal is still not yet a member of the CWM. So, we should still fire rules.
5. Now, all three rules can fire. But, using the heuristic as in 3 above, only rule 1 will fire, changing the contents of the CWM to: A, B, C, D, E, F, G, H, Z.
6. Now, Z is in the CWM. Therefore, the procedure terminates with success.
b) A backward chainer might proceed as follows:
0. The goal is not a member of the CWM. So, it must be deducible from rule application.
1. Pick a rule which immediately establishes the conclusion of the current goal Z. Such a rule is rule 1. Now, the sub-task is to establish whether the premise (F and B) is true in the CWM. This becomes a sub-goal which is a conjunction of goals F, B. Thus, both F and B would have to be proved in order to conclude F and B true.
2. Picking the first sub-goal F, using the heuristic of picking sub-goals in their linear textual ordering, rule 2 will be found applicable (since it concludes the current goal F). This introduces two further sub-goals: C and D (as in 1 above).
3. Sub-goal C is picked first, by applying the same heuristic as in 2 above. Now, C is true in the CWM. Therefore, C has been proved.
4. Then sub-goal D is attempted. It is exited with success since rule 3 concludes D, and its premise A is true in the CWM.
5. Thus, the level-2 attempt to prove C and D is exited with success. Therefore, the sub-goal F has been proved.
6. Finally, the sub-goal B is attempted. This succeeds immediately since it is a CWM element.

Thus, the level-1 attempt to prove F and B is exited with success.
7. Therefore, Z has been proved!
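Part b) is, in effect, a trace of ProLog's own execution strategy: backward chaining with depth-first search. A minimal transcription (fact/1 and holds/1 are our own illustrative names):

    % The initial CWM.
    fact(a). fact(b). fact(c). fact(e). fact(g). fact(h).

    holds(X) :- fact(X).             % steps 1-2: is G already in the CWM?
    holds(z) :- holds(f), holds(b).  % rule 1
    holds(f) :- holds(c), holds(d).  % rule 2
    holds(d) :- holds(a).            % rule 3

The query ?- holds(z). visits the sub-goals in exactly the order of steps 1-7 above.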

Example II, 2.7: Consider the knowledge base (KB):
1. Students don't like hard courses.
2. All formal courses are hard.
3. CS3410 (Theory of Computation) is a formal course.
4. Likeleli is a student.
5. Some students like any kind of course.
Prove that Likeleli dislikes CS3410 using:
a) the Backward Reasoning Control Strategy;
b) the Forward Reasoning Control Strategy.

Solution: There are three rules in the KB, namely:
Rule 1: ∀X ∀Y (student(X) ∧ course(Y) ∧ hard(Y) → ¬likes(X, Y))
Rule 2: ∀C (course(C) ∧ formal(C) → hard(C))
Rule 3: ∃S (student(S) ∧ ∀W (course(W) → likes(S, W)))
Rule 3 can further be written as:
Rule 3 a): ∀W (course(W) → likes(S1, W)), S1 a Skolem constant
Fact 3 b): student(S2), S2 a Skolem constant
Facts:
1. a) course(cs3410)
   b) formal(cs3410)
2. student(likeleli)
a)
0. The goal to prove is ¬likes(likeleli, cs3410). Since this is not a fact in the current KB, it must be deducible (if at all) from rule application in the KB. So,
1. There is only one rule, Rule 1, that concludes this goal, the unification being {likeleli/X, cs3410/Y}. The premises of the rule become the new (sub)goals. They are:

1.1 student(likeleli)
1.2 course(cs3410)
1.3 hard(cs3410)

2. Goal 1.1, student(likeleli), succeeds since it is a fact in the KB, fact 2.
3. The next goal, 1.2, course(cs3410), also succeeds since it is a fact in the KB, fact 1 a).
4. The last premise, goal 1.3, hard(cs3410), can only be inferred from Rule 2, the unification being {cs3410/C}. The resulting new (sub)goals, which are the rule premises, are:
4.1 course(cs3410)
4.2 formal(cs3410)

5. Goal 4.1, course(cs3410), is satisfiable (cf. Step 3 above).
6. Goal 4.2, formal(cs3410), succeeds since it is a fact in the KB, fact 1 b). Thus, goal 1.3, hard(cs3410), has been proven.
7. Therefore, the initial goal, ¬likes(likeleli, cs3410), has been proven, since all its premises hold. So, Likeleli doesn't like CS3410, by backward reasoning.
b)
0. Once again, the given goal is not a fact in the current KB. So, it must be derivable from the KB rules.
1. There is only one rule, Rule 2, that can fire given the current KB, with unification {cs3410/C}. When Rule 2 fires, it adds the new fact:
1.0 hard(cs3410)

2. The goal is not yet one of the facts in the KB. So, rules which can fire must be applied. Currently, all the rules can fire. However, using the heuristic of firing only rules which add new facts to the KB, only Rule 1 and Rule 3 a) will be fired.
2.0 Firing Rule 1, the unification being {likeleli/X, cs3410/Y}, we add:
2.0.0 ¬likes(likeleli, cs3410)
2.1 Firing Rule 3 a), the unification being {cs3410/W}, we add:

2.1.0 likes(S1, cs3410)
3. Now our goal, by 2.0.0, is one of the generated facts in the KB. So, we stop. Once more, Likeleli doesn't like CS3410 (by forward chaining).

Example II, 2.8: Consider the KB:
1. Basotho are a people with a culture that men are circumcised.
2. There are Basotho men who are not circumcised.
3. Basotho are Christians by religion.
4. Some Basotho adopt religions other than Christianity.
5. All religious people pray God.
6. Some people don't pray God.
Prove that some Christians don't pray God, using:
a) the Backward Reasoning Strategy;
b) the Forward Reasoning Strategy.

Solution: First of all, the KB organised as rules and facts looks like:
Rules:
1. ∀X (person(X) ∧ mosotho(X) ∧ man(X) → circumcised(X))
2. ∀Z (mosotho(Z) ∧ religious(Z) → christian(Z))
3. ∀M (person(M) ∧ religious(M) → prays(M, god))
Facts:
1 a) mosotho(C1)
  b) man(C2)
  c) ¬circumcised(C3)
2 a) mosotho(C4)
  b) religious(C5)
  c) ¬christian(C6)
3 a) person(C7)
  b) ¬prays(C8, god),

C1, C2, C3, ..., C8 are Skolem constants.

So, suppose our search strategy is depth first with backtracking, for both rule application and goal pursuance. Then, our goal is:
∃D (christian(D) ∧ ¬prays(D, god))
This is a conjunction of goals.
a) Backward Reasoning Strategy:
0. The goal is not a member of the CWM.
1. There is no rule that infers this goal.
2. However, the goal is a conjunction of goals:
2.1 christian(D), and
2.2 ¬prays(D, god)

Each of these goals will have to be proven, again in a backward reasoning manner.
3.
3.1 Goal 2.1 is not a fact in the current KB. It could be deducible from rule application.
3.2 The only rule that infers this goal is rule 2: mosotho(D) ∧ religious(D) → christian(D), unification {D/Z}. The new (sub)goals to prove are then:
3.2.1 mosotho(D)
3.2.2 religious(D)
3.3 Goal 3.2.1 succeeds since it is a fact in the current KB, fact 1 a), unification {C1/D}. So, goal 3.2.2 becomes religious(C1).

3.4 Goal 3.2.2 also succeeds since it is also a fact in the current KB, fact 2 b), unification {C1/C5}.
3.5 Due to 3.4 and 3.3 above, the premises of the rule in Step 3.2 have been proven.
3.6 Due to 3.5 above, goal 2.1 in Step 2 above has been proven.

4. Goal 2.2, ¬prays(C5, god), succeeds since it is a fact in the KB, fact 3 b), unification {C5/C8}.
5. Due to 3 and 4 above, the conjunction of goals in Step 2 above has been proven.
Thus, some Christians don't pray God, by backward reasoning.
b) Forward Reasoning Strategy:
Suppose our conflict resolution strategy in firing applicable rules is that we fire only the rules that introduce new elements to the CWM.
0. The goal is not a member of the CWM. So, it might be deducible from rule application. So,
1. According to the CWM, all rules can fire.
Firing Rule 1, unification {C1/X, C1/C2, C1/C7}, we add: circumcised(C1).
Firing Rule 2, unification {C4/Z, C4/C5}, we add: christian(C4).
Firing Rule 3, unification {C5/M, C5/C7}, we add: prays(C5, god).
2. Our goal (which is a conjunction of goals) is still not a member of the CWM.
3. No rules can be fired, according to our strategy.
4. However, the goal is a conjunction of goals, namely:
4.1 christian(D), and
4.2 ¬prays(D, god)

Each of these goals will have to be proven, again in a forward reasoning manner.
5. Goal 4.1 is now a fact in the current KB (it was added by rule application), unification {C4/D}.
6. Our next goal, goal 4.2, ¬prays(C4, god), is also a member of the CWM, fact 3 b), unification {C4/C8}.
7. Due to 5 and 6 above, the conjunction of goals in Step 4 above has been proven.
Thus, some Christians don't pray God, by forward reasoning.

Example II, 2.9: Consider the same KB of Example II, 2.8 above. Prove that circumcised men pray God, using:
a) the Forward Reasoning Strategy;
b) the Backward Reasoning Strategy.

Solution: Our goal is:
∀P (man(P) ∧ circumcised(P) → prays(P, god))
≡ ∀P (¬man(P) ∨ ¬circumcised(P) ∨ prays(P, god))
Our conflict resolution strategy and the search strategy are the same as in Example II, 2.8.
a) Forward Reasoning Strategy:
0. The goal is not a member of the CWM.
1. So, it must be deducible from rule application.
2. According to the contents of the CWM, all rules can fire:
2.1 Firing Rule 1, unification {C1/X, C1/C2, C1/C7}, we add:

circumcised(C1)
2.2 Firing Rule 2, unification {C1/Z, C1/C5}, we add: christian(C1)
2.3 Firing Rule 3, unification {C7/M, C7/C5}, we add: prays(C7, god)
3. Our goal is still not a member of the CWM.
4. Now, we attempt to fire all applicable rules again. However, according to our conflict resolution strategy, no rule can be fired.
5. Now, the goal pattern matches a disjunction of goals, which is:
5.1 ¬man(P), or
5.2 ¬circumcised(P), or
5.3 prays(P, god)

The disjunction will be proved using the short-circuit technique.
6. Goal 5.1 fails (since it is not a member of the CWM, no rule can be fired to achieve it, and it cannot be broken down into a conjunction or a disjunction of goals for further proving).
7. Goal 5.2 succeeds since it is a fact in the KB, fact 1 c), the unification being {C3/P}.
8. Due to 7 above, the disjunction in Step 5 has been proven.
Thus, circumcised men pray God, by forward reasoning.
b) Backward Reasoning Strategy:
0. Once again, the goal is not a member of the CWM. So, it must be deducible from rule application.
1. There is no rule that infers this goal.
2. The goal is a disjunction:
2.1 ¬man(P), or
2.2 ¬circumcised(P), or
2.3 prays(P, god)
The disjunction will, again, be proved using the short-circuit technique.

3. Goal 2.1 fails (since it is not a member of the CWM, no rule infers it, and it can never be broken down into more sub-goals for further proving).
4. Goal 2.2 succeeds since it is just a fact in the KB, fact 1 c), with unification {C3/P}.
5. Due to 4 above, the disjunction in Step 2 has now been proven.
So, once again, circumcised men pray God, by backward reasoning.

2.3 Semantic Nets

2.3.1 Background

A semantic net is a labelled directed graph where the nodes represent objects, classes of objects, and attribute or property values. The arcs are property (i.e. predicate) labels, class instances or class inheritance links. The arcs point to the attribute value nodes, the objects or the object classes. Conventionally, arcs indicating class membership are labelled "is a" links, while arcs indicating a class instance are labelled "instance". In order to represent any statement in a semantic net, one has to re-formulate the given sentence in an abstraction that reduces the relationship between or among the objects to binary predicates.

Example II, 2.10 Represent the following statement in a semantic net: Mofana is an African man of height 1.63 metres.

Solution: Mofana is an object. African man is a class. The binary predicate between Mofana and African man is that of an instance. The binary predicate between Mofana and the attribute value 1.63 metres is that of height. Thus, we have a net in which the node Mofana has an "instance" arc to the class node African man, and a "height" arc to the value node 1.63 metres.


2.3.2 Quantification In Semantic Nets

2.3.2.1 Background: A universally quantified statement in a semantic net can, at the first level, be represented by a class membership association. This will be inheritance via the "is a" property link.

Example II, 2.11: Represent the universally quantified statement, All Basotho men are African, in a semantic net.

Solution: In predicate calculus, this statement can be written thus: ∀X (mosotho(X) → african(X)). A semantic net representation is just a class membership association, like:

a) Mosotho man --instance--> African man

b) Mosotho man --is a--> African man

The semantic net in a) presupposes that a Mosotho man is a typical example (i.e. an instance) of an African man. The semantic net in b) assumes that Mosotho man is a class (i.e. all men who are citizens of Lesotho) whose superset is African man (i.e. all men who are Africans).

Example II, 2.12: Represent the following statement in a semantic net: Mofana bought his son a nice green apple.

Solution: We have three objects, namely the apple, Mofana and his son. The apple has the property colour, with attribute value green, and the property texture, with value nice. The relationship between Mofana and the apple is that of buying. The relationship between the apple and Mofana's son is that of a beneficiary. So, we could have a net where the apple node carries "colour" and "texture" arcs to green and nice, a "buy" arc links Mofana to the apple, and a "beneficiary" arc links the apple to Mofana's son.

2.3.2.2 Partitioned Semantic Nets: Sometimes statements may contain a mixture of universally and existentially quantified variables. In order to show which variables are universally quantified, we use partitioned semantic nets. In a partitioned semantic net, we have a node g, an instance of the class of general statements (GS), with at least two arcs emanating from it. One arc is labelled "form" and the others are labelled with the universal quantifier, ∀. The arc labelled "form" points to the semantic net enclosing the universally quantified statement. The arcs labelled with the universal quantifier go to each node, in the enclosed net, whose object or variable is universally quantified. The enclosed net is the scope of the universal quantifier. All nodes which have no arcs labelled with the universal quantifier are understood to be existentially quantified.

Example II, 2.13: Find a semantic net representation for each of the following statements:
a) Every man needs a woman.
b) Man fears all kinds of death.
c) All commoners revere the king.

Solution:
a) In predicate calculus we could have: ∀M (man(M) → ∃W (woman(W) ∧ needs(M, W))). So, a semantic net representation could be a GS instance node g with a "form" arc to the enclosed net S2 (containing the man node, the woman node and the "needs" arc between them) and a ∀ arc to the man node.
S1 and S2 are semantic net labels. S2 is contained in S1. The reader should note that there could be many semantic nets contained in S1, since there could be many different statements to be represented in one semantic net, S1.
b) The predicate representation could be: ∀M ∀D (man(M) ∧ kind_of_death(D) → fears(M, D)). Therefore, the semantic representation is as in a), except that the g node now has ∀ arcs to both the man node and the kind of death node.
c) All commoners revere the king, in predicate calculus, may have the representation: ∀C (commoner(C) → reveres(C, the king)). So, the semantic representation has a single ∀ arc to the commoner node, while the king node, being a constant, carries no quantifier arc.


2.3.3 Inheritance In Semantic Nets

2.3.3.1 Background: In predicate calculus, new knowledge can be inferred from the current KB by the use of the resolution principle. In semantic nets we cannot infer new knowledge; we can only show direct or implicit associations among objects and their attribute values. An implicit association between an object and any attribute value in the net is called heritable information. To obtain heritable information we apply the inheritance algorithm. To find an association between any two objects or classes we use the intersection search.

2.3.3.2 The Inheritance Algorithm: To find an attribute or property value for an object O with property P, in a semantic network, do the following:
1. Locate the object O in the network. Call this node the current node, C.
2. If C has the property P, then report the value pointed to by the arc labelled P, and stop.
3. If C does not have the property P, then move up the class hierarchy, one level at a time, via the "instance" or "is a" property links, to the next class(es) C. With each of these current classes C, do step 2.
4. If there are no more inheritance links (i.e. "is a" or "instance") and steps 1 to 3 have failed, then report failure and stop.
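In ProLog, this climb up the class hierarchy is two clauses. A minimal sketch, with our own illustrative predicates value/3 (attribute arcs) and isa/2 (standing for both "instance" and "is a" links), loaded with a fragment of Example II, 2.14 below:

    isa(frisco, forward).
    isa(forward, soccer_player).
    value(soccer_player, height_average, '1.65m').
    value(mid_fielder, shot_range_average, '30-50m').

    % Step 2: the property is attached to the current node itself.
    inherit(Node, Property, Value) :-
        value(Node, Property, Value).
    % Step 3: otherwise climb one inheritance level and try again.
    inherit(Node, Property, Value) :-
        isa(Node, Class),
        inherit(Class, Property, Value).

The query ?- inherit(frisco, height_average, H). climbs frisco -> forward -> soccer_player and answers H = '1.65m'. Because the first clause is tried before the climb, a value redefined at a sub-class overrides the one higher up, exactly as in Example II, 2.14 b) ii).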


2.3.3.3 Intersection Search: To find the association among the objects O1, O2, ..., On, do the following:
1. Locate the objects O1, O2, ..., On in the network. Call these the current nodes C1, C2, ..., Cn, respectively.
2. From each current node Ci, propagate in all directions, a level at a time, along all the property links, to the next attribute value, object or class node N. Record the property link pointing to or emanating from N.
3. If N has been reached before through a propagation from another object Oj, then Oi and Oj are associated via N. Report this and stop propagation for Oi and Oj.
4. If N has not yet been reached and there are property links emanating from it, then repeat from step 2 with N as the current node. If, on the other hand, there are no property links emanating from N, then report the fact that the object Oi has no association with any of the other objects, and repeat step 2 with the rest of the objects.
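A minimal ProLog sketch of the idea, assuming arcs link(From, Label, To) are followed only in their pointing direction (which suffices for the fragment below); link/3, reach/3 and intersection/3 are our own names:

    :- use_module(library(lists)).

    link(ace, instance, mid_fielder).
    link(mid_fielder, is_a, soccer_player).
    link(frisco, instance, forward).
    link(forward, is_a, soccer_player).

    % reach(+From, -Node, +Visited): every node reachable from From,
    % keeping a visited list so that cyclic nets do not loop.
    reach(Node, Node, _).
    reach(From, Node, Visited) :-
        link(From, _, Next),
        \+ member(Next, Visited),
        reach(Next, Node, [Next | Visited]).

    % Two objects are associated by any node both of them reach.
    intersection(O1, O2, Common) :-
        reach(O1, Common, [O1]),
        reach(O2, Common, [O2]).

The query ?- intersection(ace, frisco, N). answers N = soccer_player, a fragment of part c) of the example below. Unlike the algorithm above, this sketch enumerates common nodes rather than propagating strictly level by level.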

Example II, 2.14: The following could be a typical example involving two Lesotho soccer teams:
1. Potlako Ace Tšiu and Tšiliso Frisco Khomari are soccer players.
2. Ace plays for Lioli and Frisco plays for Rovers.
3. Ace is a mid-fielder and Frisco a forward.
4. Mid-fielders have a shot range average of 30 - 50m.
5. Forwards have a shot range average of 10 - 60m.
6. Soccer players are adults with an average height of 1.65m and a shot range average of 20 - 60m.
7. Adults are, on average, 1.5m high.
a) Represent this information using a semantic net.
b) Use the inheritance algorithm to answer the following:
i) Prove that Frisco's height is, on average, 1.65m.
ii) Explain why one cannot infer that Ace's shot range average is 20 - 60m.
c) Use the intersection search to find the association between Ace and Frisco.

Solution:
a) The net has object nodes Ace and Frisco, with "instance" arcs to the class nodes Mid-Fielder and Forward respectively, and "team" arcs to Lioli and Rovers. Mid-Fielder and Forward have "is a" arcs to Soccer Player, each carrying its own "shot range average" value (30 - 50m and 10 - 60m). Lioli and Rovers have "instance" arcs to Lesotho Soccer Team. Soccer Player carries "height average" 1.65m and "shot range average" 20 - 60m, and has an "is a" arc to Adult, which carries "height average" 1.5m.

b) i) We can infer (by inheritance) that Frisco's height is, on average, 1.65m, as follows:
1. First we locate the object Frisco in the knowledge base and check all its attribute links for one corresponding to "height average". In this case, we realise that the only attribute is "team" (i.e. "team" ≠ "height average").
2. Then we move up the class hierarchy to see if Frisco can inherit this property or attribute from some class higher up. We achieve this by moving up via the "instance" or "is a" link. In this case it is an "instance" link to "Forward". However, again, none of the Forward attributes equals "height average".
3. So, we move yet again up, via the "is a" link, to "Soccer Player". Here, we find two attribute links, one of which equals "height average". The value stored for this attribute is "1.65m". Thus, Frisco inherits the attribute "height average" with the attribute value "1.65m".
ii) We cannot infer that Ace's shot range average is 20 - 60m, since Ace is a Mid-Fielder and mid-fielders have a shot range average of 30 - 50m. That is, redefinition of attributes in the sub-classes overrides the definition of those attributes in the higher classes. Alternatively: the inheritance algorithm will always find the attribute "shot range average" at the "Mid-Fielder" node, report the stored value "30 - 50m" and then terminate. It will never move up to the "Soccer Player" node!
c) Using our intersection search, Ace and Frisco are associated with one another by the nodes Soccer Player and Lesotho Soccer Team. That is, they are both Lesotho soccer team players, as follows: by the algorithm, we first locate both the Ace and Frisco nodes. When we propagate all the property links the first time around, we arrive at the nodes Lioli and Mid-Fielder for Ace, and Rovers and Forward for Frisco. The second round of propagation arrives at the common nodes Soccer Player and Lesotho Soccer Team.

2.3.4 Weaknesses In Semantic Net Representation

2.3.4.1 Representing Rules: In semantic nets we are only able to represent heritable information. This means that we cannot represent rules relating objects in such a way that we can have some inferential knowledge, that is, information that could be deduced but is not given directly as facts. A typical example of a rule that cannot be represented in semantic nets is the grand parent relation:
∀X ∀Y ∀Z (parent(X, Y) ∧ parent(Y, Z) → grand_parent(X, Z))
This rule simply states that one's grand parent is a parent of one's parent. The reader is free to experiment with this example in order to verify or disprove our claim!
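By contrast, the same rule is a one-liner in a logic programming rendering of predicate calculus. A minimal sketch, borrowing the family facts of Exercise 12 below and collapsing mother/father into a single illustrative parent/2 predicate:

    parent(mmalerato, lerato).      % mother
    parent(mantsho, mmalerato).     % father
    parent(mantsho, mosebatho).     % father

    grand_parent(X, Z) :-
        parent(X, Y),
        parent(Y, Z).

The query ?- grand_parent(mantsho, W). answers W = lerato, an inference no "is a"/"instance" hierarchy can draw.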

2.3.4.2 Multiple Slot Values: There are occasions when an object has the same attribute with multiple attribute values. It is somewhat bizarre to represent this situation in semantic nets. As an illustration, consider the following piece of information: people have height averages of 1 - 1.5m, 1.5 - 2m and 2 - 2.2m. Now, if we want to represent this information in a semantic net, we have to find a binary predicate between the object people (or person) and each of the attribute values 1 - 1.5m, 1.5 - 2m and 2 - 2.2m. By inspection, such a predicate will be height average. Thus, our bizarre representation would be three parallel "height average" arcs from the person node, one to each of the three value nodes.


Alas, we have now lost any relational uniqueness among the objects. Applying the inheritance algorithm, we can no longer answer the question: what are people's height averages?

2.4 Frames

2.4.1 Background

Frames and semantic nets are nearly identical; they differ in representation. Semantic nets are a graphical representation of heritable information, connected together by arcs and nodes. A frame system, on the other hand, is a collection of named objects with heritable information from and/or to other named objects. The information in a named object consists of attributes or properties, usually referred to as slots, together with their (attribute) values (or slot values). The values are allowed to be simple values, functions, procedures or names of other (named) objects. When an attribute value of some object X is the name of another object Y, then X will inherit all the attributes and values of Y.

Example II, 2.15: Represent the following information in a frame system: There is a Japanese, Toyota-made car. Its model is 1.6GL Corolla. Its year of make is 1989 and it has plate number D3315.


Solution: We could represent this information simply as a frame object, as follows:

CarD3315:
    make: Japanese
    manufacturer: Toyota
    model: Corolla 1.6GL
    year make: 1989
    plate number: D3315

Alternatively, we could granularise this information and decide to put it in a frame system (i.e. a collection of interacting frame objects), as follows. We could look at CarD3315 as some instance of a Japanese made car. In this scenario we could have the following representation:

JapaneseMakeCar:
    make: Japanese
    manufacturer: {Toyota, Nissan, Mazda, ..., Mitsubishi}
    model:
    year make:
    plate number:

CarD3315:
    instance: JapaneseMakeCar
    model: Corolla 1.6GL
    year make: 1989
    plate number: D3315

The meaning here is that JapaneseMakeCar is a class object, and any object which is a member of this class will inherit the attributes: make, manufacturer, model, year make and plate number. The attribute value for the attribute make will be Japanese, unless the member object re-defines it to some other value. In the same vein, the attribute value for manufacturer is one of: Toyota, Nissan, Mazda, ..., Mitsubishi. The attribute values for the slots model, year make and plate number have been left blank. The understanding is that each member object will define its own specific values.
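One hedged way to make such a frame system executable is to flatten it into slot/3 facts, with the instance slot carrying the inheritance link (the encoding and the names are ours, for illustration):

    % slot(Frame, Slot, Value)
    slot(japaneseMakeCar, make, japanese).
    slot(japaneseMakeCar, manufacturer, [toyota, nissan, mazda, mitsubishi]).
    slot(carD3315, instance, japaneseMakeCar).
    slot(carD3315, model, 'Corolla 1.6GL').
    slot(carD3315, year_make, 1989).
    slot(carD3315, plate_number, 'D3315').

The blank slots of the class object are simply absent facts; the inheritance lookup over this encoding is sketched after the algorithm in section 2.4.2.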

Example II, 2.16: Represent the following in a frame system: Every person has a nationality, which is from among African, European and Asian. The person will have a name and hobbies. The hobbies will depend upon the nationality of the individual. For example, Africans will generally like music, soccer and athletics. Asians will hobby in martial arts and athletics. European hobbies are mostly soccer, athletics and rugby. Again, an individual's height average will also depend upon one's nationality, as follows: Africans have a height average of 1.6 to 2m, Asians 1.5 to 1.8m, and Europeans 1.7 to 2m. Now, Mofana is an African, Tim a European and Chong Lee an Asian.

Solution: A frame system can be something like:

Person:
    nationality: {African, Asian, European}
    height_avg: case nationality of
        African: 1.6 - 2m
        Asian: 1.5 - 1.8m
        European: 1.7 - 2m
    hobbies: case nationality of
        African: {music, soccer, athletics}
        Asian: {martial arts, athletics}
        European: {soccer, athletics, rugby}
    name:

PersonMofana:
    instance: Person
    nationality: African
    name: Mofana

PersonTim:
    instance: Person
    nationality: European
    name: Tim

PersonChongLee:
    instance: Person
    nationality: Asian
    name: Chong Lee

2.4.2 Inheritance In Frames

Inheritance in frames is achieved by allowing slot values to be names of other objects, and propagating the search as follows. To find a property P and attribute value V for an object X, do:

1. Locate X in the system. Call X the current object, O.
2. If O has property P as one of its attributes, then record its corresponding property value V, and stop.
3. If O has no property P, then, if O has any slot values which are names of other objects, let O be each of these objects in turn and repeat step 2.
4. If steps 1 to 3 have failed, then report failure and stop.
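Over the slot/3 encoding sketched in Example II, 2.15, this algorithm is again two clauses (fvalue/3 is our own illustrative name):

    % Steps 1-2: the slot is local to the frame itself.
    fvalue(Frame, Slot, Value) :-
        slot(Frame, Slot, Value).
    % Step 3: otherwise follow the instance link to the parent frame.
    fvalue(Frame, Slot, Value) :-
        slot(Frame, instance, Parent),
        fvalue(Parent, Slot, Value).

The query ?- fvalue(carD3315, manufacturer, M). follows the instance link to japaneseMakeCar and answers M = [toyota, nissan, mazda, mitsubishi], which is exactly the trace of Example II, 2.17 below.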

Example II, 2.17: Apply the inheritance algorithm for frames to the frame system of Example II, 2.15 above to answer the question: what is the manufacturer of CarD3315?

Solution:
1. First we locate the object CarD3315 in the frame system.
2. None of the attributes of CarD3315 is manufacturer. So, we look for any attribute whose slot value is the name of another object. This is the property instance, and the named object is JapaneseMakeCar.
3. We visit JapaneseMakeCar and find that one of its properties is manufacturer, with property values: Toyota, Nissan, Mazda, ..., Mitsubishi. Therefore, we stop.
Thus, the manufacturer of CarD3315 is any of: Toyota, Nissan, Mazda, ..., Mitsubishi.

Example II, 2.18: Using the knowledge in Example II, 2.16, use the inheritance algorithm to find the hobbies of the person Mofana.

Solution:
1. We locate the object PersonMofana.
2. None of the properties of this object is hobbies. Thus, we seek to inherit this attribute from higher classes, by using the available instance link.
3. The instance property has a value which is the named object Person. We visit the object Person and find the attribute hobbies. The value of this property is found by a method (i.e. a procedure), as follows: since the nationality property of object PersonMofana is African, the available values are music, soccer and athletics.
Thus, Mofana's hobbies are music, soccer and athletics.


2.5 Exercises

1. a) Represent the KB of Example II, 2.14 using a frame system representation.
b) Using inheritance:
i) Prove that Frisco's height is, on average, 1.65m.
ii) What is Ace's shot range average?
2. Redo exercise 1 above using a predicate calculus representation.
3. Given the following knowledge base (KB):
1. Mofana is the father of Lerato and Tshepo.
2. Mofella is the father of Mofana.
3. 'MaLerato is the mother of both Tshepo and Lerato.
4. Mantsho is the father of both 'MaLerato and 'Nabi.
5. All the following are males: Mofana, Tshepo, Mofella, Mantsho and 'Nabi.
6. All the following are females: Lerato and 'MaLerato.
a) i) Represent it in predicate calculus, and then
ii) write (again in predicate calculus) rules about:
1. Parent
2. Grand parent
3. Sister
4. Uncle.
b) As the result of a), apply resolution refutation to answer the following:
i) Who is Tshepo's sister?
ii) Who is Lerato's grand parent?
iii) Who is an uncle of whom?
4. a) Represent the KB given in exercise 3 above in a semantic net.
b) Is it possible to represent the rules of 3 a) ii) above in a semantic net?
5. Redo exercise 3 above using a frame system representation.
6. a) Compare and contrast predicate calculus with semantic net representation by designing a problem in which:
i) a predicate calculus representation is feasible but a semantic net representation is not;
ii) a semantic net representation is feasible but a predicate calculus representation is not.
b) Repeat exercise a) above with frames and predicate calculus.
7. a) Represent the KB of Example II, 2.1 using a production system.

b) Apply the recognise-act cycle of a production system to show that Marcus was not loyal to Caesar, using:
i) forward chaining;
ii) backward chaining.
8. Consider the KB:
Rules:
1. ∀X (Person(X) ∧ African(X) → Likes(X, "political power")).
2. ∀X (Person(X) ∧ (Honest(X) ∨ Noble(X)) → ¬Likes(X, "political power")).
Facts:
1. African(C).
2. Noble(C).
4. African(Mosisili).
5. Person(Mosisili).

Prove that Mosisili does not like political power, using:
a) the Backward Reasoning Strategy;
b) the Forward Reasoning Strategy.
9. Consider the KB:
1. Basotho are people who are born in Lesotho.
2. Some people are not born in Lesotho, but are Basotho by citizenship.
3. Basotho wear blankets (as their traditional gear) and eat a horse as a delicacy.
4. Not everyone who is not a Mosotho will not eat a horse.
5. Some non-Basotho also wear blankets.
6. Zuma is not a Mosotho.
NB: Rule 2 also suggests that citizens of Lesotho are Basotho (i.e. implicit information).
Prove that not all Basotho do not wear blankets, using:

a) the Backward Reasoning Strategy;
b) the Forward Reasoning Strategy.

10. Consider the KB:
1. At the National University of Lesotho (NUL), all students, except science students, do the computer literacy course, code-named CS1301.
2. There are non-science students who don't do CS1301.
3. Sekola is a science student.
a) Convert the KB to clause form.
b) As a result of a) above or otherwise, prove by resolution refutation that some students don't do CS1301.

11. Consider the KB:
1. Basotho are a people with a culture that men are circumcised.
2. There are Basotho men who are not circumcised.
3. Basotho are Christians by religion.
4. Some Basotho adopt religions other than Christianity.
5. All religious people pray God.
6. Some people don't pray God.
a) i) Encode the information in predicate calculus, as adequately as possible.
ii) As a result of i) above, convert the KB to clause form.
b) As a result of a) above, prove, using resolution refutation, that:
i) Circumcised men pray God.
ii) Some Christians don't pray God.

12. Consider the following KB:
Rules:
1. ∀x ∀y (mother(x, y) ∨ father(x, y) → parent(x, y))
2. ∀x ∀y ∀z (parent(x, y) ∧ parent(y, z) → grand_parent(x, z))
Facts:
1. mother(mmalerato, lerato).
2. father(mantsho, mmalerato).
3. father(mantsho, mosebatho).
Compare and contrast:
1. the Backward Reasoning Strategy, with
2. the Forward Reasoning Strategy,
when proving/finding who Mantsho is a grand parent to. Assume that rule application is always carried out depth first.

13. Consider the following knowledge base (KB):

    1. Africans are people who like political power.
    2. Some Africans are noble and so don't like political power.
    3. People who like political power are not noble and not honest.
    4. Mosisili is an African man.

    a) Encode the above KB in predicate logic, as adequately as possible.
    b) Consequent to a) above or otherwise, convert the KB to clause form.
    c) As a result of b) above or otherwise, prove by Resolution Refutation that:
       i) Mosisili is not noble.
       ii) Mosisili is noble.


3. HEURISTIC SEARCH TECHNIQUES AND CONTROL STRATEGIES

3.1 Control Strategies

3.1.1 Backtracking

We have already discussed backtracking under the section on Prolog's control strategies. We, however, reiterate it here for completeness.

In a backtracking system, when an action or route is taken towards satisfying the current goal, a choice point is recorded (detailing all the rules, operators applied, etc.) so that, if a subsequent action runs into a dead end, the system can undo all the actions taken back to the (last) choice point and select the next alternative action. The sequence of all the successful choice points to the solution constitutes a solution path, or a deduction if the goal was to prove a theorem.

A backtracking strategy is therefore tentative. Further, it is general, since it can be applied to any system. The control strategy is complete: if there is any path that leads to a solution, it will eventually be found. It is, however, expensive on storage, since all the choice points need to be recorded. Further, the term "eventually" may mean a very long time. That is, the strategy may find a solution only after a significant length of time, or "eventually" could mean indefinitely.
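To make the idea concrete, here is a small Prolog illustration. The edge/2 facts and the route/3 predicate are hypothetical, invented purely for this sketch; they are not part of any program referred to in this text.

    % A toy route-finding KB. Each edge/2 clause that can match is a
    % choice point; when a branch dead-ends, Prolog undoes the bindings
    % made since that choice point and tries the next clause.
    edge(a, b).
    edge(a, c).
    edge(b, d).
    edge(c, d).
    edge(d, e).

    % route(From, To, Path): Path is a list of nodes leading From .. To.
    route(Node, Node, [Node]).
    route(From, To, [From | Rest]) :-
        edge(From, Next),
        route(Next, To, Rest).

The query ?- route(a, e, P). first returns P = [a, b, d, e]; asking for more solutions forces backtracking to the choice point at edge(a, b) and yields P = [a, c, d, e]. The record of pending edge/2 alternatives is exactly the storage cost discussed above.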

3.1.2 Backward Chaining

Backward chaining determines or controls the direction of search with respect to the initial state and the corresponding final state. A backward chainer starts with the goal state and tries to discover all the legal inverse moves that could have led to it. The sequence of these moves forms a solution path. Thus, a backward chaining strategy is sometimes referred to as goal directed reasoning.

Since it is goal directed, a backward reasoning strategy is an informed search, since only goals or actions which can lead to the current state are considered. Backward chaining is exhaustive and, therefore, complete. It is also generally applicable. However, it is mostly employed in production systems.


3.1.2.1 Implementing Backward Chaining – backward.pro, pp 203 – 206

In Prolog, we implement the backward reasoning strategy in the program backward.pro, pp 203 – 206. The program implements the backward reasoning strategy using the depth first search strategy in applying its rules. The algorithm is implemented as:

Given a goal, G, to prove do
1. If G is a fact in the KB then STOP (if G ∈ KB then G has been proven).
2. If G is not a fact in the KB then it could be deducible from rule application. So,
   2.1 Select a rule, depth first, whose conclusion part infers G.
   2.2 Prove the condition part of the rule in backward reasoning strategy.
3. If G is a conjunction of goals, G1 and G2, then G would have been proven if
   3.1 G1 has been proven (in a backward reasoning manner) AND
   3.2 G2 has also been proven (in the same manner)
4. If G is a disjunction of goals, G1 or G2, then G would have been proven if, EITHER
   4.1 G1 has been proven OR
   4.2 G2 has been proven
5. If G is a negation of a goal, NOT G1, then prove G using the Negation as Failure Principle as follows:
   5.1 Attempt to prove G1 and, if it succeeds, conclude NOT G1 as false
   5.2 Otherwise (G1 would have failed) conclude NOT G1 as true

A minimal Prolog sketch of this algorithm is given below. In the example that follows it, we show, once again, how a backward reasoning system could go about finding a proof for a goal, using the predicate calculus representation for the rules and facts of our KB.
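The following is a minimal sketch of steps 1 - 5 above, assuming the KB is held as fact/1 and rule(IfPart, ThenPart) clauses, with and/2, or/2 and not/1 as goal connectives. This representation and the predicate names are ours, chosen for illustration; they are not the actual code of backward.pro.

    % prove(G): G follows from the KB by backward chaining.
    prove(G) :-
        fact(G).                        % step 1: G is a fact in the KB
    prove(G) :-
        rule(IfPart, G),                % step 2.1: rule taken depth first
        prove(IfPart).                  % step 2.2: prove its condition part
    prove(and(G1, G2)) :-               % step 3: conjunction
        prove(G1),
        prove(G2).
    prove(or(G1, _)) :- prove(G1).      % step 4: disjunction, short circuit
    prove(or(_, G2)) :- prove(G2).
    prove(not(G1)) :-                   % step 5: negation as failure
        \+ prove(G1).

    % A fragment of the Example II, 3.1 KB in this representation:
    fact(mother(malerato, lerato)).
    fact(mother(malerato, tshepo)).
    fact(female(lerato)).
    rule(or(mother(X, Y), father(X, Y)), parent(X, Y)).
    rule(and(female(X), and(parent(Z, X), parent(Z, Y))), sister(X, Y)).

With this fragment loaded, the query ?- prove(sister(Who, tshepo)). succeeds with Who = lerato, retracing essentially the steps worked through in the example below.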

Example II, 3.1
Given the KB:

father(mofana, lerato).
father(mofana, tshepo).
father(mofella, mofana).
mother(malerato, tshepo).
mother(malerato, lerato).
father(mantsho, malerato).
father(mantsho, nabi).
male(mofana).
male(tshepo).
male(mofella).
male(mantsho).
male(nabi).
female(lerato).
female(malerato).
∀X ∀Y (mother(X, Y) ∨ father(X, Y) → parent(X, Y)).
∀X ∀Y ∀Z (parent(X, Z) ∧ parent(Z, Y) → grand_parent(X, Y)).
∀X ∀Y ∀Z (female(X) ∧ parent(Z, X) ∧ parent(Z, Y) → sister(X, Y)).
∀X ∀Y ∀Z (male(X) ∧ sister(Z, X) ∧ parent(Z, Y) → uncle(X, Y)).

Show how the goal sister(X, tshepo) could be proved in a backward reasoning strategy.

Solution:
1. Since we will use unification, let us use a different variable from X, say _1, since X appears in the KB. So our goal is sister(_1, tshepo).

2. Now, the rule that concludes the goal sister(_1, tshepo) is:

   female(_1) ∧ parent(_2, _1) ∧ parent(_2, tshepo) → sister(_1, tshepo)

   with the unification {tshepo/Y}. We have replaced Z with _2 for the same reasons as in step 1 above. Now, the new (sub)goals to satisfy or prove are:

   female(_1), parent(_2, _1), parent(_2, tshepo)

   Suppose these goals will be tried in order. That is, they will be picked one after the other in a top-down scan.

3. The goal female(_1) immediately succeeds since it is just a fact in the KB. The unification is {lerato/_1}. This unification binding is propagated throughout the KB, so the remaining subgoals are instantiated to:

   parent(_2, lerato)
   parent(_2, tshepo)

4. Next, we try to satisfy the goal parent(_2, lerato). The rule that concludes this goal is:

   mother(_2, lerato) ∨ father(_2, lerato) → parent(_2, lerato)

   with the unification {_2/X, lerato/Y}. The new (sub)goal becomes:

   mother(_2, lerato) ∨ father(_2, lerato)

   This is a disjunction with two goals. Now, suppose we use a short circuit technique. Then we will only pick one of the goals and, if it succeeds, we will not bother about the second goal, since the disjunction will already be satisfied. Of course, if the chosen goal fails then we will backtrack and try the second goal. So, let us pick the goal mother(_2, lerato).

5. The goal mother(_2, lerato) will succeed since it is a fact in the KB. The unification is {malerato/_2}. The binding will again be propagated so that the remaining subgoal will be:

   parent(malerato, tshepo)

6. The goal parent(malerato, tshepo) is concluded by the same rule as was used in step 4 above. The unification bindings are {malerato/X, tshepo/Y}. The (sub)goal to prove will be:

   mother(malerato, tshepo) ∨ father(malerato, tshepo)

   Using the same strategy as in step 4, we pick the goal mother(malerato, tshepo).

7. The goal mother(malerato, tshepo) succeeds since it is a fact in the KB.

8. Thus, the goal sister(lerato, tshepo), X = lerato, has been proven.

3.1.3 Forward Chaining

A forward chainer starts with the initial state and, using the final state as a guide, applies all applicable rules (or actions) at any stage until the final state is arrived at. The series of moves that led to the final state describes a solution path. Since it reacts to the current data, a forward reasoning strategy is sometimes referred to as data driven reasoning. Again, since there is no direct connection between the goal and the current data, data driven reasoning is sometimes called a blind search.

Forward chaining is also exhaustive and, therefore, complete. Although generally applicable, it is, again, mainly employed in production systems.

3.1.3.1 Implementing Forward Chaining – forward.pro, pp 207 – 211

The program forward.pro is a Prolog program that implements the forward reasoning strategy using the depth first search strategy in its rule application. The algorithm is implemented as:

Given a goal, G, to prove do
1. If G is one of the elements in the current working memory (CWM) then STOP (if G ∈ CWM then G has been proven).
2. If G is not in the CWM then it could be deducible from rule application. So,
   2.1 Select a rule, depth first, whose condition part matches the contents of the CWM.
   2.2 Fire the rule if it introduces new elements into the CWM, else backtrack to 2.1.
   2.3 Go to 1.
3. If G is a conjunction of goals, G1 and G2, then G would have been proven if
   3.1 G1 has been proven AND
   3.2 G2 has also been proven
4. If G is a disjunction of goals, G1 or G2, then G would have been proven if, EITHER
   4.1 G1 has been proven OR
   4.2 G2 has been proven
5. If G is a negation of a goal, NOT G1, then prove G using the Negation as Failure Principle as follows:
   5.1 Attempt to prove G1 and, if it succeeds, conclude NOT G1 as false
   5.2 Otherwise (G1 would have failed) conclude NOT G1 as true
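A minimal sketch of this loop is given below, again assuming the rule(IfPart, ThenPart) representation of the backward chaining sketch, with the CWM held as known/1 facts in the Prolog database. The names are ours, for illustration; this is not the actual code of forward.pro.

    :- dynamic known/1.

    % run(G): forward chain until the goal G appears in the CWM.
    run(G) :-
        known(G), !.                     % step 1: G is in the CWM
    run(G) :-
        rule(IfPart, ThenPart),          % step 2.1: rule taken depth first,
        holds(IfPart),                   %           matched against the CWM
        \+ known(ThenPart),              % step 2.2: fire only if it adds
        assertz(known(ThenPart)),        %           a new element
        run(G).                          % step 2.3: go to step 1

    % holds/1 tests a rule's condition part against the CWM (steps 3 - 5).
    holds(and(P1, P2)) :- !, holds(P1), holds(P2).
    holds(or(P1, P2))  :- !, ( holds(P1) ; holds(P2) ).
    holds(not(P))      :- !, \+ holds(P).    % negation as failure
    holds(P)           :- known(P).

Seeding the CWM with the Example II, 3.1 facts (e.g. assertz(known(mother(malerato, tshepo))), and so on) and calling ?- run(sister(Who, tshepo)). generates the parent/2 facts first and then the sister/2 fact, much as Example II, 3.2 below traces by hand.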

Example II, 3.2: Using the same KB as in Example II, 3.1 above, show how the goal sister(X, tshepo) could be proven in a forward reasoning strategy.

Solution:
1. The goal to prove is sister(_1, tshepo).

2. Now, with the current data (i.e. facts) only one rule can fire. This is the rule:

   ∀X ∀Y (mother(X, Y) ∨ father(X, Y) → parent(X, Y))

   Applying this rule exhaustively, we add the following facts to the KB:

   1st possible application, depth first, with unification {malerato/X, tshepo/Y}:
   parent(malerato, tshepo).

   2nd possible application by backtracking, depth first, with unification {malerato/X, lerato/Y}:
   parent(malerato, lerato).

   3rd possible application by backtracking, depth first, with unification {mofana/X, lerato/Y}:
   parent(mofana, lerato).

   4th possible application by backtracking, depth first, with unification {mofana/X, tshepo/Y}:
   parent(mofana, tshepo).

   5th possible application by backtracking, depth first, with unification {mofella/X, mofana/Y}:
   parent(mofella, mofana).

   6th possible application by backtracking, depth first, with unification {mantsho/X, malerato/Y}:
   parent(mantsho, malerato).

   7th possible application by backtracking, depth first, with unification {mantsho/X, nabi/Y}:
   parent(mantsho, nabi).

3. After this rule application, we realise that our goal is still not available as one of the (generated) facts in the KB. So, we will repeat the process of rule application. With the facts generated so far, all the (other) rules can now apply. The rule that we used earlier will not be fired on this round since it would not add new facts (but rather duplicate facts). Applying the rule:

   ∀X ∀Y ∀Z (parent(X, Z) ∧ parent(Z, Y) → grand_parent(X, Y))

   exhaustively, with appropriate unification and backtracking, we add the following facts to the KB:

   grand_parent(mofella, lerato).
   grand_parent(mofella, tshepo).

   grand_parent(mantsho, tshepo).
   grand_parent(mantsho, lerato).

   Applying the rule:

   ∀X ∀Y ∀Z (female(X) ∧ parent(Z, X) ∧ parent(Z, Y) → sister(X, Y))

   exhaustively, with appropriate unification and backtracking, we add the following facts to the KB:

   sister(lerato, tshepo).
   sister(malerato, nabi).

4. Now, the goal sister(lerato, tshepo), X = lerato, is in the KB and we stop.

Example II, 3.3
Consider the knowledge base:

1. Marcus was a man.
2. Marcus was a Pompeian.
3. All Pompeians were Romans.
4. Caesar was a ruler.
5. All Romans were either loyal to Caesar or hated him.
6. Everyone is loyal to someone.
7. People only try to assassinate rulers they are not loyal to.
8. Marcus tried to assassinate Caesar.

a) Prove that Marcus was not loyal to Caesar, using
   i) Backward Reasoning Control Strategy.
   ii) Forward Reasoning Control Strategy.
b) Prove that either Marcus was loyal to Caesar or he hated him, using
   i) Backward Reasoning Control Strategy.
   ii) Forward Reasoning Control Strategy.

Solution:
For convenience, the KB has been encoded in predicate calculus as follows:

Facts:
Fact 1: man(marcus).
Fact 2: pompeian(marcus).

© mofana mphaka

Page -113-

Introduction to Artificial Intelligence -114Fact 3: ruler(caesar). Fact 4: try_to_assassinate(marcus, caesar). Rules: Rule 1: Rule 2: Rule 3: Rule 4:

œX (pompian(X) 6 roman(X)) œY (roman(Y) 6 loyal_to(Y, caesar) w hate(Y, caesar)) œZ (man(Z) 6 ›W loyal_to(Z, W)) œU œV (man(U) v ruler(V) v try_to_assassinate(U, V) 6 ¬loyal_to(U, V))

a) i)
0. The goal is ¬loyal_to(marcus, caesar). Since this is not a fact in the KB, it must be deducible from rules.

1. There are four rules in the KB. The rule that concludes this goal is Rule 4, with unification {marcus/U, caesar/V}. The premises of this rule bring about the following (sub)goals to prove:
   1.0 man(marcus)
   1.1 ruler(caesar)
   1.2 try_to_assassinate(marcus, caesar)

2. Goal 1.0, man(marcus), immediately succeeds since it is a fact in the KB (i.e. Fact 1).

3. Goal 1.1, ruler(caesar), also succeeds since it is a fact, Fact 3.

4. Goal 1.2 also succeeds since it is Fact 4.

5. Now, Goal 0 has been proven since all the premises hold (i.e. 1.0, 1.1 and 1.2 are true). So, Marcus was not loyal to Caesar by backward chaining.

ii)
0. Once again, the goal is not a fact in the KB. So, it must be derivable from rule application.

1. Currently, the rules that can fire (because their conditions are satisfied by the current KB) are:
   1.0 Rule 1
   1.1 Rule 3
   1.2 Rule 4

2. Firing Rule 1, unification being {marcus/X}, we add the new fact:

   2.0 roman(marcus)

3. Firing Rule 3, with unification {marcus/Z}, we add the fact:
   3.0 ∃W loyal_to(marcus, W)

4. Firing Rule 4, unification being {marcus/U, caesar/V}, we add the new fact:
   4.0 ¬loyal_to(marcus, caesar)

5. Now, the initial goal has been achieved since it is now part of the KB. So, Marcus was not loyal to Caesar by forward reasoning.

b) i)
0. The theorem is loyal_to(marcus, caesar) ∨ hate(marcus, caesar). Since the goal is not a fact, it must be deducible from rule inference.

1. There is only one rule, Rule 2, that infers this goal. The unification is {marcus/Y}. The new (sub)goal now becomes:
   1.0 roman(marcus)

2. Goal 1.0 is not a fact in the KB. So, it must be inferred by some rule.

3. There is only one rule that infers this goal, Rule 1, with unification {marcus/X}. The rule has the premise:
   3.0 pompeian(marcus)

4. Goal 3.0 is a fact in the KB. So, it succeeds.

5. From 4 above, Goal 1.0 succeeds.

6. As the result of 5 above, the initial goal, Goal 0, succeeds. Therefore, Marcus was loyal to Caesar or hated him by backward reasoning.

ii)
0. The goal is not a fact in the KB. So, it must arise as a result of rule application.

1. The rules that can fire are:
   1.0 Rule 1
   1.1 Rule 3
   1.2 Rule 4

2. Firing Rule 1, unification being {marcus/X}, we add the new knowledge:
   2.0 roman(marcus)

3. Firing Rule 3, with unification {marcus/Z}, we add:
   3.0 ∃W loyal_to(marcus, W)

4. Firing Rule 4, unification being {marcus/U, caesar/V}, we add the new fact:
   4.0 ¬loyal_to(marcus, caesar)

5. Now, all the rules that could fire have been fired, but our goal is still not part of the fact base. So, we proceed with rule application.

6. In this second round, all the rules can fire. However, using the heuristic that selects only rules that add new facts to the KB, we have only one rule to fire, Rule 2.

7. Firing Rule 2, with unification {marcus/Y}, we add the fact:
   7.0 loyal_to(marcus, caesar) ∨ hate(marcus, caesar)

8. Now, our hypothesis has been generated by rule application. Thus, Marcus was loyal to Caesar or hated him by forward reasoning.

3.2 Search Strategies

3.2.1 Depth First Search

We discussed the depth first strategy under our Prolog short tour. We, however, summarise it here again, by way of emphasis. A depth first search strategy can be summarised as follows:

At any stage, follow the first available solution path as far as possible, marking each choice point at each subsequent stage for any possible backtracking later on. Whenever a dead end is reached in the current solution path, always backtrack to the last choice point, consider the next available alternative (if any), and re-start the depth first search from there. Iterate in this manner until a solution has been found or all the alternatives have been exhausted, in which case, terminate with failure.

The depth first search strategy is complete on finite search spaces (on an infinite space it may pursue a non-terminating path forever) and, depending on the complexity of the partial solutions being explored, it may be economic on storage, since only one partial solution is pursued at a time. The only record kept is the indexing to the other solution paths for purposes of possible backtracking later on.

The reader will note that in Example II, 3.1 the way in which we were picking goals to satisfy, from a conjunction or a disjunction of (sub)goals, was in a depth first search manner.
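As a sketch, depth first search over an arbitrary state space can be written in a few Prolog clauses. Here move/2, a state-transition relation, is a hypothetical predicate to be supplied by the particular problem; the Visited list is our addition, to block cycles.

    % dfs(State, Goal, Visited, Path): follow the first available move as
    % far as possible; failure (a dead end) makes Prolog backtrack to the
    % last move/2 choice point and try the next alternative.
    dfs(Goal, Goal, _, [Goal]).
    dfs(State, Goal, Visited, [State | Path]) :-
        move(State, Next),
        \+ member(Next, Visited),
        dfs(Next, Goal, [Next | Visited], Path).

    solve_dfs(Start, Goal, Path) :-
        dfs(Start, Goal, [Start], Path).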

3.2.2 Breadth First Search

At any solution stage, consider all possible solution paths emanating from the current choice point. Pursue each one of them one level deep (or any k, k ≥ 1, levels deep). If the solution has been reached then stop; otherwise apply the breadth first search at the current level. Iterate in this fashion until either the solution has been reached or all the solution paths have been exhausted.

The breadth first search strategy is complete, and no backtracking is required, since it is exhaustive at each level of solution exploration (i.e. all possible alternatives are explored k levels deep at a time). Further, the strategy will find the shortest solution path if one exists. In terms of storage, this strategy may be wasteful, since at any level all the partial solutions are kept and explored.

The reader will again note that in Example II, 3.2 the way in which we were picking goals to satisfy, from a conjunction or a disjunction of (sub)goals, was in a breadth first search manner.
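A matching sketch of breadth first search keeps a queue of partial paths (stored in reverse) and always expands the shallowest one first; move/2 is the same hypothetical transition relation as in the depth first sketch.

    % bfs(Queue, Goal, Path): the queue holds reversed partial paths.
    bfs([[Goal | RevPath] | _], Goal, Path) :-
        reverse([Goal | RevPath], Path).
    bfs([[State | RevPath] | Rest], Goal, Path) :-
        findall([Next, State | RevPath],
                ( move(State, Next), \+ member(Next, [State | RevPath]) ),
                Extensions),
        append(Rest, Extensions, Queue),   % new paths join at the back:
        bfs(Queue, Goal, Path).            % FIFO gives level-by-level search

    solve_bfs(Start, Goal, Path) :-
        bfs([[Start]], Goal, Path).

Because a whole level of alternatives sits in the queue at once, the first path returned is a shortest one, at the storage cost noted above.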

3.3 Further Heuristic Search Algorithms

3.3.1 Background

There are several heuristic search algorithms. We list here only a few: Generate and Test, Hill Climbing, Best First Search (the A* Algorithm), Constraint Satisfaction and Means Ends Analysis. Most of these techniques can also be shown to be reducible to one or more of the search strategies we have already studied above.

3.3.2 Generate and Test

The algorithm for Generate (hypotheses) and Test can be given as follows:

1. Generate possible solution(s) or solution path(s).
2. If the final state (solution) has been reached then stop.
3. If the solution has not yet been found then go to step 1.

Generate and Test requires that the final state be known. The reader will note that this strategy is, in many respects, similar to the forward reasoning strategy. Again, step 2 of the algorithm suggests that the algorithm could be either a depth first or a breadth first search strategy, as follows: if one partial solution is generated at a time then this will be the depth first search strategy; if, on the other hand, all partial solutions are generated at the same level then this will be the breadth first search strategy.
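The following sketch shows how directly this maps onto Prolog, whose backtracking supplies the "go to step 1" loop. The generator and the toy sorting instance are hypothetical, chosen only for illustration.

    % generate/1 proposes candidates (step 1); goal/1 recognises the
    % final state (step 2); failure backtracks into generate/1 (step 3).
    generate_and_test(Solution) :-
        generate(Solution),
        goal(Solution).

    % Toy instance: search for a sorted permutation of a list.
    generate(P) :- permutation([2, 3, 1], P).
    goal(P)     :- msort(P, P).

The query ?- generate_and_test(S). enumerates permutations one at a time, depth first, until S = [1, 2, 3] passes the test.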

3.3.3 Hill Climbing

Hill climbing is very similar to the generate and test technique. It again requires that the final state be known. A heuristic function, returning a value of how close one is to the final state, is used to make the system move from the current partial solution to the next one which is "better", or "closer" to the final solution, than the current one. We summarise the algorithm as follows:

1. Let the initial state be the current state.
2. If the current state is the final state then stop.
3. If the current state is not yet the final state then generate all possible next states.
4. Use the implemented heuristic function to select the "best" next state, make it the current state, and go to step 2.

Hill climbing has two problems associated with it: plateaux and local maxima or minima. These are situations where no better next state can be found. On a plateau, all next states have the same value; thus, it is difficult to choose the best one to move to. At a local maximum or minimum, all the next states have worse values than the current state; thus, one cannot move to any next state.
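A sketch of the loop, in which move/2 (successor states), h/2 (the heuristic value, taken here as lower = closer to the goal) and goal/1 (the final-state test) are hypothetical problem-supplied predicates:

    hill_climb(State, State) :-
        goal(State).                             % step 2: final state reached
    hill_climb(State, Final) :-
        findall(V-Next,                          % step 3: all next states,
                ( move(State, Next), h(Next, V) ),   % scored by the heuristic
                Scored),
        keysort(Scored, [BestV-Best | _]),       % step 4: best-scoring first
        h(State, CurrentV),
        BestV < CurrentV,                        % fails on a plateau or at a
        hill_climb(Best, Final).                 % local optimum: no improvement

The guard BestV < CurrentV is exactly where the two problems above bite: on a plateau the best successor scores the same as the current state, and at a local optimum it scores worse, so in both cases the clause fails with no next state to move to.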

3.3.4 Best First Search Strategy

When the breadth first and depth first search strategies are combined, as in the following, the result is called the best first search strategy:

1. If the current state is the final solution then stop.
2. If the final solution has not been reached, generate all possible solution paths from the current state.
3. Using a suitable heuristic function, select the "most promising" next state (node). Let it be the current state (node) and go to step 1.

Step 3 of this algorithm suggests that when the most promising next node is selected, the rest of the unexpanded (unexplored) nodes will be marked for possible backtracking later on. If the chosen heuristic is good, the best first search strategy will find the shortest path to the solution with minimal cost - meaning that, in the end, comparatively fewer dead end solution paths would be traversed than would be the case if the raw depth first or breadth first search had been used.

The best first search strategy has been modified to produce a family of derivative algorithms whose names all begin with A as the prefix. These algorithms include: the A algorithm, the A* algorithm and the AO* algorithm.
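As a sketch, the difference from the breadth first code is only that the agenda of partial paths is kept ordered by the heuristic instead of FIFO; move/2 and h/2 are the same hypothetical problem-supplied predicates as before.

    % Agenda entries are Value-RevPath pairs, kept sorted by Value.
    best_first([_-[Goal | RevPath] | _], Goal, Path) :-
        reverse([Goal | RevPath], Path).
    best_first([_-[State | RevPath] | Rest], Goal, Path) :-
        findall(V-[Next, State | RevPath],
                ( move(State, Next),
                  \+ member(Next, [State | RevPath]),
                  h(Next, V) ),
                Extensions),
        append(Extensions, Rest, Agenda0),
        keysort(Agenda0, Agenda),         % most promising node to the front;
        best_first(Agenda, Goal, Path).   % the rest wait, as with backtracking

    solve_best_first(Start, Goal, Path) :-
        h(Start, V),
        best_first([V-[Start]], Goal, Path).

Scoring h as the estimated distance to the goal gives a greedy best first search; adding the cost already incurred to the score is what turns this into the A* algorithm.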

3.3.4.1 Example Programs Implementing The Best First Search Strategy

a) The Water Jug Program (wtr-jug.pro; pp 177 – 183)

The Rules:-
Rules in wtr-jug.pro are Prolog clauses of the form

    RuleNo from state(X1, Y1) to state(X2, Y2) :- Cond.

Where
1. RuleNo is the rule index, like rule1 or rule10.
2. state(X1, Y1) is the initial/current state (before the rule is applied).
3. state(X2, Y2) is the next/final state (after the rule has been applied).
4. Cond is the condition that must hold before the rule can apply. Once the condition holds, state(X1, Y1) is immediately transformed into state(X2, Y2).

The Best First Search Strategy:-
The best first search strategy is employed on rule application as follows:

1. Take all the rules in the KB. That is, rules are taken breadth first:

    bagof(Rule from State1 to AnyState2, Rule from State1 to AnyState2, ListOfRules),

2. Select the applicable rules, ApplicableRules, from the list of all the available rules, ListOfRules. An applicable rule is one that does not introduce duplicate state transitions:

    select_applicable_rules(ListOfRules, DuplicateTransitions, ApplicableRules),

3. Finally, select the best rule (from all the applicable rules) to apply and discard the rest. The best rule is the one that makes the distance between the current state and the final state (i.e. the goal state) the shortest. If we have the current state and the final state as:

    CurrState(X1, Y1), FinalState(X2, Y2)

then the distance between the two is calculated as the norm |X2 - X1| + |Y2 - Y1|:

    select_best_rule(ApplicableRules, FinalState, Rule from State1 to State2),
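For concreteness, here are two rules of the kind described, for the classic 4- and 3-litre jugs. The op/3 declarations are our assumption of how the from/to notation could be made legal Prolog, and the rule bodies are illustrative, not quoted from wtr-jug.pro.

    :- op(710, xfx, from).
    :- op(705, xfx, to).

    % rule1: fill the 4-litre jug from the tap.
    rule1 from state(X, Y) to state(4, Y) :-
        X < 4.

    % rule7: pour water from the 3-litre jug into the 4-litre jug
    % until the 4-litre jug is full.
    rule7 from state(X, Y) to state(4, Y1) :-
        Y > 0,
        X + Y >= 4,
        Y1 is Y - (4 - X).

Under these declarations a query such as ?- rule1 from state(2, 3) to S. answers S = state(4, 3), and the bagof/3 call above can then collect every rule applicable to the current state.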

b) The Stanford Research Institute Problem Solver Formalism (strips.pro; pp 177 – 183)

The Operators:-
In this formalism, instead of rules, we have what are called operators. Operators are Prolog clauses of the form

    operator(OprName, preconditions : PreConditions, delete_list : DelList, add_list : AddList).

Where
1. OprName is the name of the operator.
2. PreConditions is a list of predicates (i.e. a possible conjunction of goals) that must hold (in the current KB) before the operator can be applied.
3. DelList is a list of predicates (i.e. a possible conjunction of goals) that no longer hold (i.e. that must be deleted from the current KB) after the operator has been applied.
4. AddList is a list of predicates (i.e. a possible conjunction of goals) that are now true (i.e. that must be added to the current KB) after the operator has been applied.

The Best First Search Strategy:-
The best first search strategy, again, is employed on rule application as follows:

1. Consider all the operators in the KB. That is, consider operators breadth first:

    bagof(opr(Opr, pre : PreCond, del : Del, add : Add),
          operator(Opr, preconditions : PreCond, delete_list : Del, add_list : Add),
          ListOfOprs),

2. Select the applicable operators, ApplicableOprs, from the list of all the available operators, ListOfOprs. An applicable operator is one that asserts the given goal. That is, the given goal appears as one of the elements of the add list of the operator:

    select_applicable_oprs(Goal, ListOfOprs, ApplicableOprs),

3. Finally, select the best operator to apply and discard the rest. The best operator is the one whose precondition list is the "most satisfied" by the current KB. That is, the operator which has the largest number of preconditions satisfied by the current KB. In order to achieve this, we execute the following algorithm:

3.1 Compute the length of each operator's precondition list, X.
3.2 Compute the number of satisfied predicates in the operator's precondition list, Y.
3.3 Compute and store the ratio Y/X for each operator.
3.4 In the end, pick the operator, OprName, whose ratio Y/X is the largest - it is the operator whose precondition list is the most satisfied by the current KB. If two operators have the same ratio Y/X, then the preferred operator is the one with the shorter add list. Such an operator is more specific; an operator with a longer add list tends to be more general - the goal just happens to be one of the things that will be added. If the operators have add lists of equal length, then pick the operator with the least number of preconditions. Such an operator is specific and less expensive.

    select_best_operator(opr(OprName, pre : PreConditions, del : DeleteList, add : AddList), ApplicableOprs, CKB),
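As an illustration of the format, here is one hypothetical blocks-world operator (the action and predicate names are invented; they are not quoted from strips.pro):

    % pick_up(X): lift block X off the table.
    operator(pick_up(X),
             preconditions : [clear(X), on_table(X), hand_empty],
             delete_list   : [on_table(X), hand_empty],
             add_list      : [holding(X)]).

For the goal holding(a), this operator is applicable (the goal unifies with a member of its add list), and its Y/X ratio is 3/3 when clear(a), on_table(a) and hand_empty all hold in the current KB.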

3.3.5 Means Ends Analysis & Constraint Satisfaction

Constraint satisfaction and means ends analysis are usually employed by planning systems.

In means ends analysis, operators are successively applied to reduce the distance, or the difference, between the goal state and the initial state. In this way, means ends analysis can be regarded as a combination of both the forward and the backward reasoning strategies.

In constraint satisfaction, the difference between the initial state and the goal state is specified as a set of constraints to be satisfied by all the objects in the system. The objects in the system are viewed as forming a network, since they interact: the nodes of the network are the objects, and an arc connecting any two nodes is the constraint that must hold between those two objects. When a constraint is modified, the modification should be propagated throughout the entire network to maintain the network's stability.
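A minimal sketch of difference reduction: states here are just integers, and the op_step/2 operators (invented for this illustration) are applied only when they bring the current state strictly closer to the goal.

    op_step(add10,  10).
    op_step(sub10, -10).
    op_step(add1,    1).
    op_step(sub1,   -1).

    % mea(State, Goal, Plan): Plan is a sequence of operators, each of
    % which reduces the difference between the current state and the goal.
    mea(Goal, Goal, []).
    mea(State, Goal, [Op | Plan]) :-
        State =\= Goal,
        op_step(Op, D),
        Next is State + D,
        abs(Goal - Next) < abs(Goal - State),   % must reduce the difference
        mea(Next, Goal, Plan).

The query ?- mea(3, 24, Plan). answers Plan = [add10, add10, add1]: each step is chosen by the difference it removes, not by blind forward search, which is the sense in which means ends analysis mixes data driven and goal directed reasoning.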

3.4 Exercises

1. a) Design a problem in which the forward reasoning strategy will find a solution with less storage space for the KB than backward reasoning.
   b) Repeat exercise a) above showing, this time, the superiority of backward reasoning over forward reasoning.

2. Repeat exercise 1 above with the depth first and breadth first search strategies.

3. Describe, in detail, a game in which the best first search strategy can be exploited.

4. REFERENCES

1. Charniak, E. and McDermott, D.; Introduction to Artificial Intelligence; Addison-Wesley Publishing Co.; 1985.
2. Nilsson, N. J.; Principles of Artificial Intelligence; Springer-Verlag; 1982.
3. Rich, E. and Knight, K.; Artificial Intelligence; International Edition; McGraw-Hill Inc.; 1991.
