Practical Artificial Intelligence Programming in Java

Practical Artificial Intelligence Programming in Java Version 1.2, last updated January 1, 2006. by Mark Watson.

This work is licensed under the Creative Commons Attribution NoDerivs NonCommercial License. Additional license terms:

● No commercial use without the author's permission
● The work is published "AS IS" with no implied or expressed warranty - you accept all risks for the use of both the Free Web Book and any software program examples that are bundled with the work.

This web book may be distributed freely in an unmodified form. Please report any errors to [email protected] and look occasionally at Open Content (Free Web Books) at www.markwatson.com for newer versions.

Requests from the author

I live in a remote area, the mountains of Northern Arizona, and work remotely via the Internet. Although I really enjoy writing Open Content documents like this Web Book and working on Open Source projects, I earn my living as a consultant. I specialize in artificial intelligence applications, software design, and development in Java, Common Lisp, and Ruby. Please keep me in mind for consulting jobs!

Table of Contents

Table of Contents
Preface
    Acknowledgements
Introduction
    Notes for users of UNIX and Linux
    Use of the Unified Modeling Language (UML) in this book
Chapter 1. Search
    1.1 Representation of State Space, Nodes in Search Trees and Search Operators
    1.2 Finding paths in mazes
    1.3 Finding Paths in Graphs
    1.4 Adding heuristics to Breadth First Search
    1.5 Search and Game Playing
        1.5.1 Alpha-Beta search
        1.5.2 A Java Framework for Search and Game Playing
        1.5.3 TicTacToe using the alpha beta search algorithm
        1.5.4 Chess using the alpha beta search algorithm
Chapter 2. Natural Language Processing
    2.1 ATN Parsers
        2.1.1 Lexicon data for defining word types
        2.1.2 Design and implementation of an ATN parser in Java
        2.1.3 Testing the Java ATN parser
    2.2 Using Prolog for NLP
        2.2.1 Prolog examples of parsing simple English sentences
        2.2.2 Embedding Prolog rules in a Java application
Chapter 3. Expert Systems
    3.1 A tutorial on writing expert systems with Jess
    3.2 Implementing a reasoning system with Jess
    3.3 Embedding Jess Expert Systems in Java Programs
Chapter 4. Genetic Algorithms
    4.1 Java classes for Genetic Algorithms
    4.2 Example System for solving polynomial regression problems
Chapter 5. Neural networks
    5.1 Hopfield neural networks
    5.2 Java classes for Hopfield neural networks
    5.3 Testing the Hopfield neural network example class
    5.5 Backpropagation neural networks
    5.6 A Java class library and examples for using back propagation neural networks
    5.7 Solving the XOR Problem
    5.8 Notes on using back propagation neural networks
6. Machine Learning using Weka
    6.1 Using machine learning to induce a set of production rules
    6.2 A sample learning problem
    6.3 Running Weka
Chapter 7. Statistical Natural Language Processing
    7.1 Hidden Markov Models
    7.2 Training Hidden Markov Models
    7.3 Using the trained Markov model to tag text
Chapter 8. Bayesian Networks
Bibliography
Index

For my grandson Calvin and granddaughter Emily


Preface

This book was written for both professional programmers and home hobbyists who already know how to program in Java and who want to learn practical AI programming techniques. I have tried to make this a fun book to work through. In the style of a "cook book", the chapters in this book can be studied in any order. Each chapter follows the same pattern: a motivation for learning a technique, some theory for the technique, and a Java example program that you can experiment with.

Acknowledgements

I would like to thank Kevin Knight for writing a flexible framework for game search algorithms in Common LISP (Rich, Knight 1991) and for giving me permission to reuse his framework, rewritten in Java; the game search Java classes in Chapter 1 were loosely patterned after this Common LISP framework and allow new games to be written by subclassing three abstract Java classes. I would like to thank Sieuwert van Otterloo for writing the Prolog in Java program and for giving me permission to use it in this free web book. I would like to thank Ernest J. Friedman-Hill at Sandia National Laboratory for writing the Jess expert system toolkit. I would like to thank Christopher Manning and Hinrich Schutze for writing their "Foundations of Statistical Natural Language Processing" book; it is one of the most useful textbooks that I own. I would like to thank my wife Carol for her support in both writing this book and all of my other projects. I would also like to acknowledge the use of the following fine software tools: the IntelliJ Java IDE and the Poseidon UML modeling tool (www.gentleware.com).


Introduction

This book provides the theory of many useful techniques for AI programming. There are relatively few source code listings in this book, but complete example programs that are discussed in the text should have been included in the same ZIP file that contained this web book. If someone gave you this web book without the examples, you can download an up-to-date version of the book and examples on the Open Content page of www.markwatson.com. All the example code is covered by the GNU General Public License (GPL). If the GPL prevents you from using any of the examples in this book, please contact me for other licensing terms.

The code examples all consist of either reusable (non-GUI) libraries or throwaway test programs that solve a specific application problem; in some cases, the application-specific test code will contain a GUI written in JFC (Swing). The examples in this book should be included in the same ZIP file that contains the PDF file for this free web book. The examples are found in the subdirectory src, which contains:

● src
● src/expertsystem - Jess rule files
● src/expertsystem/weka - Weka machine learning files
● src/ga - genetic algorithm code
● src/neural - Hopfield and Back Propagation neural network code
● src/nlp
● src/nlp/Markov - a part of speech tagger using a Markov model
● src/nlp/prolog - NLP using embedded Prolog
● src/prolog - source code for the Prolog engine written by Sieuwert van Otterloo
● src/search
● src/search/game - contains the alpha-beta search framework and tic-tac-toe and chess examples
● src/search/statespace
● src/search/statespace/graphexample - graph search code
● src/search/statespace/mazeexample - maze search code

To run any example program mentioned in the text, simply change directory to the src directory that was created from the example program ZIP file from my web site. Individual example programs are in separate subdirectories contained in the src directory. Typing "javac *.java" will compile the example program contained in any subdirectory, and typing "java Prog", where Prog is the file name of the example program file with the file extension ".java" removed, will run it. None of the example programs (except for the NLBean natural language database interface) is placed in a separate package, so compiling the examples will create compiled Java class files in the current directory.

I have been interested in AI since reading Bertram Raphael's excellent book "The Thinking Computer: Mind Inside Matter" in the early 1980s. I have also had the good fortune to work on many interesting AI projects, including the development of commercial expert system tools for the Xerox LISP machines and the Apple Macintosh, development of commercial neural network tools, application of natural language and expert systems technology, application of AI technologies to Nintendo and PC video games, and the application of AI technologies to the financial markets. I enjoy AI programming, and hopefully this enthusiasm will also infect the reader.

Notes for users of UNIX and Linux

I use Mac OS X, Linux and Windows 2000 for my Java development. To avoid wasting space in this book, I show examples for running Java programs and sample batch files for Windows only. If I show in the text an example of running a Java program that uses JAR files like this:

java -classpath nlbean.jar;idb.jar NLBean

the conversion to UNIX or Linux is trivial; replace ";" with ":" like this:

java -classpath nlbean.jar:idb.jar NLBean

If I show a command file like this c.bat file:

javac -classpath idb.jar;. -d . nlbean/*.java
jar cvf nlbean.jar nlbean/*.class
del nlbean\*.class

Then a UNIX/Linux (or Mac OS X) equivalent using bash might look like this:

#!/bin/bash
javac -classpath idb.jar:. -d . nlbean/*.java
jar cvf nlbean.jar nlbean/*.class
rm -f nlbean/*.class

Use of the Unified Modeling Language (UML) in this book

In order to discuss some of the example code in this book, I use Unified Modeling Language (UML) class diagrams. Figure 1 shows a simple UML class diagram that introduces the UML elements used in other diagrams in this book. Figure 1 contains one Java interface

IPrinter and three Java classes TestClass1, TestSubClass1, and TestContainer1. The following listings show these classes and interface, which do nothing except provide an example for introducing UML:

Listing 1 – IPrinter.java

public interface IPrinter {
    public void print();
}

Listing 2 – TestClass1.java

public class TestClass1 implements IPrinter {
    protected int count;
    public TestClass1(int count) {
        this.count = count;
    }
    public TestClass1() {
        this(0);
    }
    public void print() {
        System.out.println("count=" + count);
    }
}

Listing 3 – TestSubClass1.java

public class TestSubClass1 extends TestClass1 {
    public TestSubClass1(int count) {
        super(count);
    }
    public TestSubClass1() {
        super();
    }
    public void zeroCount() {
        count = 0;
    }
}

Listing 4 – TestContainer1.java

public class TestContainer1 {
    public TestContainer1() {
    }
    TestClass1 instance1;
    TestSubClass1 [] instances;
}
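The four listings can be exercised with a small driver that demonstrates each of the UML relationships in Figure 1: interface implementation, inheritance, and containment. The recap of the listings and the driver class name UmlDemo below are my additions for a self-contained sketch, not part of the book's example code.

```java
// Self-contained recap of Listings 1-4 plus a small driver.
// The driver class UmlDemo is an illustrative addition, not book code.
interface IPrinter { void print(); }

class TestClass1 implements IPrinter {
    protected int count;
    public TestClass1(int count) { this.count = count; }
    public TestClass1() { this(0); }
    public void print() { System.out.println("count=" + count); }
}

class TestSubClass1 extends TestClass1 {
    public TestSubClass1(int count) { super(count); }
    public TestSubClass1() { super(); }
    public void zeroCount() { count = 0; }
}

class TestContainer1 {
    public TestContainer1() {}
    TestClass1 instance1;          // contains exactly one TestClass1
    TestSubClass1[] instances;     // contains zero or more (0..*) TestSubClass1
}

public class UmlDemo {
    public static void main(String[] args) {
        IPrinter p = new TestClass1(3);   // implementation: TestClass1 is-a IPrinter
        p.print();                        // prints: count=3

        TestSubClass1 sub = new TestSubClass1(7); // inheritance from TestClass1
        sub.zeroCount();
        sub.print();                      // prints: count=0

        TestContainer1 c = new TestContainer1();  // containment
        c.instance1 = new TestClass1(1);
        c.instances = new TestSubClass1[] { sub };
        System.out.println("contained: " + c.instances.length); // prints: contained: 1
    }
}
```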

Again, the code in Listings 1 through 4 is just an example to introduce UML. In Figure 1, note that both the interface and the classes are represented by shaded boxes; the interface box is labeled as an interface. The shaded boxes have three sections:

● Top section – name of the interface or class
● Middle section – instance variables
● Bottom section – class methods

In Figure 1, notice that we have three types of arrows:

● Dotted line with a solid arrowhead – indicates that TestClass1 implements the interface IPrinter
● Solid line with a solid arrowhead – indicates that TestSubClass1 is derived from the base class TestClass1
● Solid line with a lined arrowhead – used to indicate containment. The unadorned arrow from class TestContainer1 to TestClass1 indicates that the class TestContainer1 contains exactly one instance of the class TestClass1. The arrow from class TestContainer1 to TestSubClass1 is adorned: the 0..* indicates that the class TestContainer1 can contain zero or more instances of class TestSubClass1.

Figure 1. Sample UML class diagram showing one Java interface and three Java classes

This simple UML example should be sufficient to introduce the concepts that you will need to understand the UML class diagrams in this book.


Chapter 1. Search

Early AI research emphasized the optimization of search algorithms. This approach made a lot of sense because many AI tasks can be solved effectively by defining state spaces and using search algorithms to define and explore search trees in this state space. Search programs were frequently made tractable by using heuristics to limit areas of search in these search trees. This use of heuristics converts intractable problems to solvable problems by compromising the quality of solutions; this trade-off of less computational complexity for less than optimal solutions has become a standard design pattern for AI programming. We will see in this chapter that we trade off memory for faster computation time and better results; often, by storing extra data we can make search time faster, and make future searches in the same search space even more efficient.

What are the limitations of search? Early on, search applied to problems like checkers and chess misled early researchers into underestimating the extreme difficulty of writing software that performs tasks in domains that require general world knowledge or that deal with complex and changing environments. These types of problems usually require the understanding and then the implementation of domain-specific knowledge.

In this chapter, we will use three search problem domains for studying search algorithms: path finding in a maze, path finding in a static graph, and alpha-beta search in the games tic-tac-toe and chess. The examples in this book should be included in the examples ZIP file for this book. The examples for this chapter are found in the subdirectory src that contains:

● src
● src/search
● src/search/game – contains alpha-beta search framework and tic-tac-toe and chess examples
● src/search/statespace
● src/search/statespace/graphexample – graph search code
● src/search/statespace/mazeexample – maze search code

1.1 Representation of State Space, Nodes in Search Trees and Search Operators

We will use a single search tree representation in the graph search and maze search examples in this chapter. Search trees consist of nodes that define locations in state space and links to other nodes. For some small problems, the search tree can be easily specified statically; for example, when performing search in game mazes, we can precompute a search tree for the state space of the maze. For many problems, it is impossible to completely enumerate a search tree for a state space, so we must define successor node search operators that for a given node produce all nodes that can be reached from the current node in one step; for example, in the game of

chess we cannot possibly enumerate the search tree for all possible games of chess, so we define a successor node search operator that, given a board position (represented by a node in the search tree), calculates all possible moves for either the white or black pieces. The possible chess moves are calculated by a successor node search operator and are represented by newly calculated nodes that are linked to the previous node. Note that even when it is simple to fully enumerate a search tree, as in the game maze example, we still might want to generate the search tree dynamically, as we will do in this chapter.

For calculating a search tree we use a graph. We will represent graphs as nodes with links between some of the nodes. For solving puzzles and for game-related search, we will represent positions in the search space with Java objects called nodes. Nodes contain arrays of references to both child and parent nodes. A search space using this node representation can be viewed as a directed graph or a tree. The node that has no parent nodes is the root node, and all nodes that have no child nodes are called leaf nodes.

Search operators are used to move from one point in the search space to another. We deal with quantized search spaces in this chapter, but search spaces can also be continuous in some applications. Often search spaces are either very large or are infinite. In these cases, we implicitly define a search space using some algorithm for extending the space from our reference position in the space. Figure 1.1 shows representations of search space as both connected nodes in a graph and as a two-dimensional grid with arrows indicating possible movement from a reference point denoted by R.

Figure 1.1 A directed graph (or tree) representation is shown on the left and a two-dimensional grid (or maze) representation is shown on the right. In both representations, the letter R is used to represent the current position (or reference point) and the arrowheads indicate legal moves generated by a search operator. In the maze representation, the two grid cells marked with an X indicate locations that a search operator cannot generate.

When we specify a search space as a two-dimensional array, search operators will move the point of reference in the search space from a specific grid location to an adjoining grid location. For some applications, search operators are limited to moving up/down/left/right; in other applications, operators can additionally move the reference location diagonally. When we specify a search space using the node representation, search operators can move the reference point down to any child node or up to the parent node. For search spaces that are represented implicitly, search operators are also responsible for determining legal child nodes, if any, from the reference point.
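The node representation described above (nodes holding references to parent and child nodes, with root and leaf nodes defined by missing links) can be sketched as a small Java class. The class and method names here are illustrative assumptions, not the book's actual search library:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the node representation described in the text:
// each node links to its parent nodes and child nodes. Names are
// illustrative, not taken from the book's search classes.
class SearchNode {
    final String name;
    final List<SearchNode> parents = new ArrayList<>();
    final List<SearchNode> children = new ArrayList<>();

    SearchNode(String name) { this.name = name; }

    void addChild(SearchNode child) {
        children.add(child);
        child.parents.add(this);
    }

    boolean isRoot() { return parents.isEmpty(); }  // no parent nodes
    boolean isLeaf() { return children.isEmpty(); } // no child nodes
}

public class NodeDemo {
    public static void main(String[] args) {
        SearchNode root = new SearchNode("R");
        SearchNode a = new SearchNode("A");
        SearchNode b = new SearchNode("B");
        root.addChild(a);
        root.addChild(b);
        System.out.println(root.isRoot() + " " + b.isLeaf()); // prints: true true
    }
}
```

A search operator over this representation simply walks down to a child or up to a parent, as the text describes.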

Note: I use slightly different libraries for the maze and graph search examples. I plan to clean up this code in the future and have a single abstract library to support both maze and graph search examples.

1.2 Finding paths in mazes

The example program used in this section is MazeSearch.java in the directory src/search/maze, and I assume that the reader has downloaded the entire example ZIP file for this book and placed the source files for the examples in a convenient place. Figure 1.2 shows the UML class diagram for the maze search classes: depth first and breadth first search. The abstract base class AbstractSearchEngine contains common code and data that is required by both the classes DepthFirstSearch and BreadthFirstSearch. The class Maze is used to record the data for a two-dimensional maze, including which grid locations contain walls or obstacles. The class Maze defines three static short integer values used to indicate obstacles, the starting location, and the ending location.


Figure 1.2 UML class diagram for the maze search Java classes

The Java class Maze defines the search space. This class allocates a two-dimensional array of short integers to represent the state of any grid location in the maze. Whenever we need to store a pair of integers, we will use an instance of the standard Java class java.awt.Dimension, which has two integer data components: width and height. Whenever we need to store an x-y grid location, we

create a new Dimension object (if required), and store the x coordinate in Dimension.width and the y coordinate in Dimension.height. As in the right-hand side of Figure 1.1, the operator for moving through the search space from given x-y coordinates allows a transition to any adjacent grid location that is empty. The Maze class also contains the x-y location for the starting location (startLoc) and goal location (goalLoc). Note that for these examples, the class Maze sets the starting location to grid coordinates 0-0 (upper left corner of the maze in the figures to follow) and the goal node to (width – 1)-(height – 1) (lower right corner in the following figures).

The abstract class AbstractSearchEngine is the base class for both DepthFirstSearchEngine and BreadthFirstSearchEngine. We will start by looking at the common data and behavior defined in AbstractSearchEngine. The class constructor has two required arguments: the width and height of the maze, measured in grid cells. The constructor defines an instance of the Maze class of the desired size and then calls the utility method initSearch to allocate an array searchPath of Dimension objects, which will be used to record the path traversed through the maze. The abstract base class also defines other utility methods:

● equals(Dimension d1, Dimension d2) – checks to see if two Dimension arguments are the same
● getPossibleMoves(Dimension location) – returns an array of Dimension objects that can be moved to from the specified location. This implements the movement operator.
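As a stand-alone illustration of a movement operator in the style of getPossibleMoves, the following sketch returns the up/down/left/right neighbors of a grid location that are inside the maze and unobstructed, using java.awt.Dimension for x-y locations as described above. The OBSTACLE constant and the implementation details are my assumptions, not the actual AbstractSearchEngine code:

```java
import java.awt.Dimension;

// Illustrative sketch of a movement operator: from a grid location,
// return the four up/down/left/right neighbors that are in bounds and
// not blocked. A null array entry indicates an illegal move. The
// OBSTACLE value and this exact code are assumptions, not book code.
public class MoveDemo {
    static final short OBSTACLE = 1;
    static short[][] maze = new short[4][4];   // 0 = empty grid cell

    static Dimension[] getPossibleMoves(Dimension loc) {
        Dimension[] moves = new Dimension[4];
        int[][] deltas = { {0, -1}, {0, 1}, {-1, 0}, {1, 0} };
        for (int i = 0; i < deltas.length; i++) {
            int x = loc.width + deltas[i][0];
            int y = loc.height + deltas[i][1];
            if (x >= 0 && x < maze.length && y >= 0 && y < maze[0].length
                    && maze[x][y] != OBSTACLE) {
                moves[i] = new Dimension(x, y);
            }
        }
        return moves;
    }

    public static void main(String[] args) {
        maze[1][0] = OBSTACLE;                 // wall to the right of the start
        Dimension[] m = getPossibleMoves(new Dimension(0, 0));
        int legal = 0;
        for (Dimension d : m) if (d != null) legal++;
        System.out.println(legal);             // prints: 1
    }
}
```

From the upper left corner with a wall at (1,0), only the move down to (0,1) is legal, so the array holds one non-null entry.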

Now, we will look at the depth first search procedure. The constructor for the derived class DepthFirstSearchEngine calls the base class constructor and then solves the search problem by calling the method iterateSearch. We will look at this method in some detail. The arguments to iterateSearch specify the current location and the current search depth:

private void iterateSearch(Dimension loc, int depth) {

The class variable isSearching is used to halt the search, avoiding further solutions, once one path to the goal is found:

if (isSearching == false) return;

We set the maze value to the depth for display purposes only:

maze.setValue(loc.width, loc.height, (short)depth);

Here, we use the super class getPossibleMoves method to get an array of possible neighboring squares that we could move to; we then loop over the four possible moves (a null value in the array indicates an illegal move):

Dimension [] moves = getPossibleMoves(loc);
for (int i=0; i<moves.length; i++) {

hungry => cook_food
cook_food => eat_food

This substitution of symbols makes a difference for human readers, but a production system interpreter does not care. Still, the form of these three rules is far too simple to encode interesting knowledge. We will extend the form of the rules by allowing:

● Variables in both the LHS and RHS terms of production rules
● Multiple LHS and RHS terms in rules
● The ability to perform arithmetic operations in both LHS and RHS terms
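A forward-chaining interpreter for rules this simple can itself be written in a few lines of Java. The sketch below is my own illustration, not code from the book: it treats working memory as a set of symbols and repeatedly fires any rule whose LHS symbol is present, until no new symbols are produced.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Toy forward-chaining interpreter for single-symbol rules such as
// "hungry => cook_food". Illustrative only, not the book's code:
// each rule maps one LHS symbol to one RHS symbol; firing a rule adds
// the RHS symbol to working memory; we loop until quiescence.
public class TinyProductionSystem {
    public static Set<String> run(Map<String, String> rules, Set<String> memory) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Map.Entry<String, String> rule : rules.entrySet()) {
                // fire if LHS is in working memory and RHS is new
                if (memory.contains(rule.getKey()) && memory.add(rule.getValue())) {
                    changed = true;
                }
            }
        }
        return memory;
    }

    public static void main(String[] args) {
        Map<String, String> rules = new LinkedHashMap<>();
        rules.put("hungry", "cook_food");
        rules.put("cook_food", "eat_food");
        Set<String> memory = new HashSet<>(Collections.singleton("hungry"));
        System.out.println(run(rules, memory).contains("eat_food")); // prints: true
    }
}
```

Starting from the single fact hungry, the two rules chain to derive cook_food and then eat_food, which is exactly the stimulus-response behavior the text describes.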

The use of variables in rules is crucial since it allows us to properly generalize knowledge encoded in rules. The use of multiple LHS terms allows us to use compound tests on environmental data. In an English syntax, if we use a question mark to indicate a variable, rather than a constant, a rule might look like this:

If
    have food ?food_object
    ?food_object is_frozen
    ?food_object weight ?weight
    have microwave_oven
Then
    place ?food_object in microwave_oven
    set microwave_oven timer to (compute ?weight * 10)
    turn on microwave_oven

This rule is still what I think of as a stimulus-response rule; higher-level control rules might set a goal of being hungry that would enable this rule to execute. We need to add an additional LHS term to allow higher-level control rules to set a "prepare food" goal; we can rewrite this rule and add an additional rule that could execute after the first rule (additional terms and rules are shown in italics):

If
    state equals I_am_hungry
    have food ?food_object
    ?food_object is_frozen
    ?food_object weight ?weight
    have microwave_oven
Then
    place ?food_object in microwave_oven
    set microwave_oven timer to (compute ?weight * 10)
    turn on microwave_oven
    set state to (I_am_hungry and food_in_microwave)
    set microwave_food to ?food_object

If
    state equals food_in_microwave
    microwave_timer ?value < 0
    microwave_food ?what_food_is_cooking
Then
    remove ?what_food_is_cooking from microwave
    eat ?what_food_is_cooking
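To make the role of variables concrete, the following toy Java fragment (my own illustration, not book code) shows a fact's slot values being bound to variables such as ?weight during LHS matching, and an RHS arithmetic term like (compute ?weight * 10) being evaluated from that binding:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not book code) of pattern-variable binding:
// matching the LHS term "?food_object weight ?weight" against a fact
// binds the variables, and the RHS term (compute ?weight * 10) is then
// evaluated using the bound value to set the microwave timer.
public class BindingDemo {
    // RHS arithmetic: (compute ?weight * 10)
    static int computeTimer(int weight) { return weight * 10; }

    public static void main(String[] args) {
        // a working-memory fact with name, weight, and is_frozen slots
        Map<String, Object> fact = new HashMap<>();
        fact.put("name", "peas");
        fact.put("weight", 14);
        fact.put("is_frozen", true);

        // bindings produced by matching the LHS terms against the fact
        Map<String, Object> bindings = new HashMap<>();
        bindings.put("?food_object", fact.get("name"));
        bindings.put("?weight", fact.get("weight"));

        int timer = computeTimer((Integer) bindings.get("?weight"));
        System.out.println(bindings.get("?food_object") + " " + timer); // prints: peas 140
    }
}
```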

A higher-level control rule could set an environmental variable state to the value I_am_hungry, which would allow the RHS terms of this rule to execute if the other four LHS terms matched environmental data. We have assumed that rules match their LHS terms with environmental data. This environmental data and higher-level control data is stored in working memory. If we use a weak analogy, production rules are like human long-term memory, and the transient data in working memory is like human short-term memory. The production rules are stored in production memory. We will see in the next section how the OPS5/CLIPS language supports structured working memory that is matched against the LHS terms of rules stored in production memory.

The examples in this chapter were developed using version 5.1 of Jess, but they should run fine using the latest version. If you get the Jess ZIP file from Ernest Friedman-Hill's web site (linked from my web site also), UNZIP the file, creating a directory Jess51, and copy the examples in src/expertsystem to the top level Jess51 directory (the directory will be Jess52 or Jess60 if you are using a newer version of Jess).

The example shown previously in an "English-like format" is converted to a Jess notation and is available in the file food.clp; it is shown below with interspersed comments. The following statement, deftemplate, is used to define position-independent data structures. In this case, we are defining a structure have_food that contains named slots name, weight, and is_frozen:

(deftemplate have_food
  (slot name)
  (slot weight)
  (slot is_frozen (default no)))

Notice that the third slot is_frozen is given a default value of "no". It is often useful to set default slot values when a slot usually has one value, with rare exceptions. The Jess interpreter always runs rules with zero LHS terms first. Here, I use a rule called startup that has zero LHS terms, so this rule will fire when the system is reset using the built-in function reset. The startup rule in this example adds two data elements to working memory:

(defrule startup
  =>
  (assert (have_food (name spinach) (weight 10)))
  (assert (have_food (name peas) (weight 14) (is_frozen yes))))
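As a rough Java analog of a deftemplate with a default slot value, a plain class can supply the default in a constructor. The HaveFood class below is purely illustrative and is not part of the Jess API:

```java
// Illustrative analog of the have_food deftemplate: the isFrozen
// field defaults to "no", just as the Jess slot does.
public class HaveFood {
    public final String name;
    public final int weight;
    public final String isFrozen;

    // Two-argument constructor supplies the default slot value.
    public HaveFood(String name, int weight) {
        this(name, weight, "no");
    }

    public HaveFood(String name, int weight, String isFrozen) {
        this.name = name;
        this.weight = weight;
        this.isFrozen = isFrozen;
    }

    public static void main(String[] args) {
        HaveFood spinach = new HaveFood("spinach", 10);      // uses default
        HaveFood peas = new HaveFood("peas", 14, "yes");     // overrides it
        System.out.println(spinach.isFrozen + " " + peas.isFrozen);
    }
}
```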


This rule startup creates two structured working memory elements. Once these working memory elements (or facts) are created, the Jess runtime system uses the Rete algorithm to automatically determine which other rules are eligible to execute. As it happens, the following rule is made eligible to execute (i.e., it is entered into the conflict set of eligible rules) after the two initial working memory elements are created by the rule startup:

(defrule thaw-frozen-food "thaw out some food"
  ?fact <- (have_food (name ?name) (weight ?w) (is_frozen yes))
  =>
  (retract ?fact)
  (assert (have_food (name ?name) (weight ?w) (is_frozen no)))
  (printout t "Using the microwave to thaw out " ?name crlf)
  (printout t "Thawing out " ?name " to produce " ?w " ounces of " ?name crlf))

We see several new features in the rule thaw-frozen-food:

- The LHS pattern (only one in this rule, but there could be many) is assigned to a variable ?fact that will reference the specific working memory element that matched the LHS pattern.
- The first RHS action (retract ?fact) removes from working memory the element that instantiated this rule firing; note that this capability did not exist in the simple reasoning system implemented in Chapter 3.
- The second RHS action asserts a new fact into working memory; the variables ?name and ?w are set to the values matched in the first LHS pattern. Setting the slot is_frozen is not necessary in this case because we are setting it to its default value.
- The third and fourth RHS actions print messages to the user (the "t" indicates printing to standard output). Note that the Jess printout function can also print to an opened file.

The next two lines in the input file reset the system (making the rule startup eligible to execute) and run the system until no more rules are eligible to execute:

(reset)
(run)

To run this example, copy the contents of the directory src/expertsystem to the top level Jess51 directory, change directory to the top level Jess directory, and then type the following:

javac jess/*.java
java jess.Main food.clp

The first statement compiles the Jess system; you only need to do this one time. Here is the output that you will see:

Jess, the Java Expert System Shell
Copyright (C) 1998 E.J. Friedman Hill and the Sandia Corporation
Jess Version 5.1 4/24/2000
Using the microwave to thaw out peas
Thawing out peas to produce 14 ounces of peas

This example is simple and tutorial in nature, but it should give you sufficient knowledge to read and understand the examples that come with the Jess distribution. You should pause to experiment with the Jess system before continuing with this chapter.

3.2 Implementing a reasoning system with Jess

In this section, we will see how to design and implement a reasoning system using a forward chaining production system interpreter like CLIPS or Jess. There are two common uses for reasoning in expert systems:

1. Perform meta-level control of rule firing (e.g., prefer rules that indicate the use of cheaper resources, prefer rules written by experts over novices, etc.)
2. Implement a planning/reasoning system using forward chaining rules

We will choose option 2 for the example in this section. The following source listing is in the file src/expertsystem/reasoning.clp. This listing is interspersed with comments explaining the code.

We define three working memory data templates to solve this problem: the state of a block, an old state of a block (to avoid executing rules in infinite cycles or loops), and a goal that we are trying to reach. The template block has three slots: name, on_top_of, and supporting:

(deftemplate block
    (slot name)
    (slot on_top_of (default table))
    (slot supporting (default nothing)))

The template old_block_state has the same three named slots as the template block:

(deftemplate old_block_state
    (slot name)
    (slot on_top_of (default table))
    (slot supporting (default nothing)))

The template goal has two slots: a supporting block name and the name of the block sitting on top of this first block:

(deftemplate goal
    (slot supporting_block)
    (slot supported_block))

As with our previous Jess example, the rule startup is eligible to execute after calling the functions (reset) and (run).

(defrule startup "This is executed when (reset) (run) is executed"
  =>
  (assert (goal (supporting_block C) (supported_block A)))
  (assert (block (name A) (supporting B)))
  (assert (block (name B) (on_top_of A) (supporting C)))
  (assert (block (name C) (on_top_of B)))
  (assert (block (name D)))
  (assert (block (name table) (supporting A)))
  (assert (block (name table) (supporting C))))
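The initial block configuration asserted by startup can be mirrored in Java to check whether the goal (block C supporting block A) holds yet. The Block record and helper methods below are invented for this sketch; they are not part of Jess:

```java
import java.util.*;

public class BlocksState {
    // Mirrors the block template's three slots; "table" and "nothing"
    // correspond to the deftemplate default slot values.
    public record Block(String name, String onTopOf, String supporting) {}

    // The working memory created by the startup rule: B on A, C on B,
    // D alone on the table.
    public static List<Block> initialState() {
        return List.of(
            new Block("A", "table", "B"),
            new Block("B", "A", "C"),
            new Block("C", "B", "nothing"),
            new Block("D", "table", "nothing"),
            new Block("table", "table", "A"));
    }

    // The goal from the startup rule: block C supporting block A.
    public static boolean goalMet(List<Block> wm) {
        return wm.stream()
            .anyMatch(b -> b.name().equals("C") && b.supporting().equals("A"));
    }

    public static void main(String[] args) {
        System.out.println(goalMet(initialState())); // false in the initial state
    }
}
```

In the initial state the goal test fails, which is exactly why the block-moving rules that follow are needed.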

Figure 3.1 shows the initial block setup and the goal state created by the rule startup.

Figure 3.1 The goal set up in the rule startup is to get block A on top of block C.

The following rule set-block-on attempts to move one block on top of another. There are two preconditions for this rule to fire:

- Both blocks must not have any other blocks on top of them.
- We can not already have cleared the bottom block (this condition prevents infinite loops).

We see a condition on the matching process in the second LHS element: the matching variable ?block_2 can not equal the matching variable ?block_1. Here, we use the not-equals function neq; the corresponding equals function is eq.

(defrule set-block-on "move ?block_1 to ?block_2 if both are clear"
  ?fact1