Spring 2004

CMPT 882

Instructor: Oliver Schulte

Due: Tuesday, February 24, in class.

Topic: Learning Bayes Nets.

The purpose of this assignment is for you to become familiar with Bayes nets and how they represent knowledge. I would like you to accomplish two things: First, acquire some software that allows you to build a simple Bayes net of your own. Second, experiment with a dataset so that you get a feel for what Bayes net learners do, in particular with respect to finding causal relations among variables. The remainder of this web page gives more details and suggestions on how to accomplish these goals.

Building Bayes Nets

Building your own Bayes Net will help you understand how Bayesian Nets represent knowledge. Read the text first to get an idea of the general theory.

1. Run some software that lets you build a Bayesian net.

For C/Unix, we have the software from the text and various freeware packages from the Machine Learning Network web site. Hugin has a demo version (Hugin Lite) for Solaris. Other suggestions, or reports of your experience with these sources, are welcome. The nicest software to use comes, not surprisingly, from commercial companies, which mostly offer software for the PC and sometimes for the Mac. Hugin again works on these platforms, though the PC version, at least, is a pretty large executable. An alternative for the PC is Bayesware's trial version. You might find visiting Bayesware's website worthwhile in any case, especially if you are interested in machine learning companies. Bill Havens recommends Netica, which supports a number of platforms (including Unix, I believe).

For Java versions, Glendon Holst recommends UBC's CIspace. CMU's Tetrad project has a Java interface which seems to be okay.

(A note on Hugin: You will have to point it to a web browser executable to get it to run.)

2. Build a Bayesian Net. The Hugin and Bayesware software comes with tutorials that you can go through. (A note on Hugin: the program doesn't always work quite like the tutorial says, but it's easy to see what to do.) Please show me print-outs from one of those tutorials, or from a comparable task. If you are not sure what to do, you can look at the Hugin tutorial on-line. Basically, it steps you through building a 3-node Bayes net and running a simple query on it. If you are using Hugin, just do that. If you are using something else, you should be able to build the same 3-node net in your software and replicate the query.
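
If you want a sanity check on whatever answer your software gives, a 3-node query is small enough to reproduce by hand. The sketch below uses a hypothetical 3-node net of my own (Rain and Sprinkler as parents of WetGrass, with made-up probabilities — not the exact net from the Hugin tutorial) and answers a query by brute-force enumeration of the joint distribution, which is exactly what inference in a Bayes net amounts to at this scale:

```python
from itertools import product

# Hypothetical 3-node net: Rain -> WetGrass <- Sprinkler.
# All probability values are made up for illustration.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
# P(WetGrass=true | Rain, Sprinkler)
P_wet = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.80, (False, False): 0.01}

def joint(r, s, w):
    """P(Rain=r, Sprinkler=s, WetGrass=w), factored along the net's arrows."""
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * (pw if w else 1.0 - pw)

def query_rain_given_wet():
    """P(Rain=true | WetGrass=true), summing out Sprinkler."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(round(query_rain_given_wet(), 3))  # -> 0.719
```

Observing wet grass raises the probability of rain from the prior 0.2 to about 0.72 — the kind of evidence-driven belief update your software's query window should display.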

3. Try out a naive Bayesian classifier. If you can run the one from the text on the text classification data, that would be fine. An alternative exercise would be to go through the example in Section 6.9.1, which constructs a Bayesian classifier for our familiar "play tennis" example. You could build this classifier in your software and enter one or two instances to see how it classifies them.
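
The hand calculation in Section 6.9.1 is also easy to replicate in a few lines of code. The sketch below trains a naive Bayes classifier on the standard 14-example "play tennis" table using simple frequency estimates (no smoothing), then classifies the instance (sunny, cool, high, strong); this is, I believe, the same instance the text works through:

```python
# The familiar 14-example "play tennis" data set.
# Columns: Outlook, Temperature, Humidity, Wind, PlayTennis.
DATA = [
    ("Sunny",    "Hot",  "High",   "Weak",   "No"),
    ("Sunny",    "Hot",  "High",   "Strong", "No"),
    ("Overcast", "Hot",  "High",   "Weak",   "Yes"),
    ("Rain",     "Mild", "High",   "Weak",   "Yes"),
    ("Rain",     "Cool", "Normal", "Weak",   "Yes"),
    ("Rain",     "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"),
    ("Sunny",    "Mild", "High",   "Weak",   "No"),
    ("Sunny",    "Cool", "Normal", "Weak",   "Yes"),
    ("Rain",     "Mild", "Normal", "Weak",   "Yes"),
    ("Sunny",    "Mild", "Normal", "Strong", "Yes"),
    ("Overcast", "Mild", "High",   "Strong", "Yes"),
    ("Overcast", "Hot",  "Normal", "Weak",   "Yes"),
    ("Rain",     "Mild", "High",   "Strong", "No"),
]

def classify(instance):
    """Naive Bayes with plain frequency estimates.

    Returns (best_label, scores), where each score is
    P(label) * product over attributes of P(value | label)."""
    labels = {row[-1] for row in DATA}
    scores = {}
    for label in labels:
        rows = [row for row in DATA if row[-1] == label]
        score = len(rows) / len(DATA)            # prior P(label)
        for i, value in enumerate(instance):     # times P(attr_i = value | label)
            score *= sum(1 for row in rows if row[i] == value) / len(rows)
        scores[label] = score
    return max(scores, key=scores.get), scores

label, scores = classify(("Sunny", "Cool", "High", "Strong"))
print(label)   # -> No (score about 0.021 for "No" vs. 0.005 for "Yes")
```

If your software's classifier agrees with these numbers, you have built the net correctly; note that a tool using m-estimate smoothing, which the text also discusses, will give slightly different scores but the same classification.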

4. Try running a largish dataset and see what the causal inferences are. A very nice tutorial for this, especially good for discussing issues about causality, is the B-course tutorial. It is web-based, so you don't have to worry about installing anything. You could run one of the data sets, e.g. "Popular kids", in various iterations and check out the causal graphs. What is the difference between the "naive causal model" and the "not-so-naive" one? Is there anything that strikes you as particularly impressive or counterintuitive about the models inferred by these two Bayes net learners?

(The B-course has a "java playground" that allows you to ask queries (make predictions) given the Bayes net that it constructed before. You might find it instructive to play around with that to get a sense of the predictive power of Bayesian nets. It's not strictly required for this assignment, however.)