Chapter 13  Case study: data structure selection

13.1  Word frequency analysis

As usual, you should at least attempt the following exercises before you read my solutions.

Exercise 1
Write a program that reads a file, breaks each line into
words, strips whitespace and punctuation from the words, and
converts them to lowercase.

Hint: The string module provides strings named whitespace, which contains space, tab, newline, etc., and punctuation, which contains the punctuation characters. Let's see if we can make Python swear:

>>> import string
>>> print string.punctuation
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~

Also, you might consider using the string methods strip, replace and translate.

Exercise 2

Go to Project Gutenberg (gutenberg.org) and download your favorite out-of-copyright book in plain text format.

Modify your program from the previous exercise to read the book you downloaded, skip over the header information at the beginning of the file, and process the rest of the words as before.

Then modify the program to count the total number of words in the book, and the number of times each word is used.

Print the number of different words used in the book. Compare different books by different authors, written in different eras. Which author uses the most extensive vocabulary?

Exercise 3
Modify the program from the previous exercise to print the
20 most frequently-used words in the book.
Exercise 4
Modify the previous program to read a word list (see
Section 9.1) and then print all the words in the book that
are not in the word list. How many of them are typos? How many of
them are common words that should be in the word list, and how
many of them are really obscure?
13.2  Random numbers

Given the same inputs, most computer programs generate the same outputs every time, so they are said to be deterministic. Determinism is usually a good thing, since we expect the same calculation to yield the same result. For some applications, though, we want the computer to be unpredictable. Games are an obvious example, but there are more.

Making a program truly nondeterministic turns out to be not so easy, but there are ways to make it at least seem nondeterministic. One of them is to use algorithms that generate pseudorandom numbers. Pseudorandom numbers are not truly random because they are generated by a deterministic computation, but just by looking at the numbers it is all but impossible to distinguish them from random.

The random module provides functions that generate pseudorandom numbers (which I will simply call "random" from here on).

The function random returns a random float between 0.0 and 1.0 (including 0.0 but not 1.0). Each time you call random, you get the next number in a long series. To see a sample, run this loop:

import random

for i in range(10):
    x = random.random()
    print x

The function randint takes parameters low and high and returns an integer between low and high (including both).

>>> random.randint(5, 10)
5
>>> random.randint(5, 10)
9

To choose an element from a sequence at random, you can use choice:

>>> t = [1, 2, 3]
>>> random.choice(t)
2
>>> random.choice(t)
3

The random module also provides functions to generate random values from continuous distributions including Gaussian, exponential, gamma, and a few more.
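The ranges described above can be checked directly; here is a quick sketch, seeded so the run is repeatable:

```python
import random

random.seed(17)              # fix the starting point of the pseudorandom series

x = random.random()          # float in [0.0, 1.0): 0.0 is possible, 1.0 is not
n = random.randint(5, 10)    # integer in [5, 10], both endpoints included
c = random.choice(['a', 'b', 'c'])

assert 0.0 <= x < 1.0
assert 5 <= n <= 10
assert c in ['a', 'b', 'c']
```

Seeding is useful for testing precisely because it restores determinism: the same seed always produces the same series of numbers.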
Exercise 5

Write a function named choose_from_hist that takes a histogram and returns a random value from the histogram, chosen with probability in proportion to frequency. For example, for this histogram:

>>> t = ['a', 'a', 'b']
>>> h = histogram(t)
>>> print h
{'a': 2, 'b': 1}

your function should return 'a' with probability 2/3 and 'b' with probability 1/3.

13.3  Word histogram

Here is a program that reads a file and builds a histogram of the words in the file:

import string

def process_file(filename):
    h = dict()
    fp = open(filename)
    for line in fp:
        process_line(line, h)
    return h

def process_line(line, h):
    line = line.replace('-', ' ')
    for word in line.split():
        word = word.strip(string.punctuation + string.whitespace)
        word = word.lower()
        h[word] = h.get(word, 0) + 1

hist = process_file('emma.txt')

This program reads emma.txt, which contains the text of Emma by Jane Austen.
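To see process_line at work without the whole file, you can feed it a single string; this sketch repeats the function and applies it to one line:

```python
import string

def process_line(line, h):
    # Replace hyphens with spaces, split, strip punctuation, lowercase, count.
    line = line.replace('-', ' ')
    for word in line.split():
        word = word.strip(string.punctuation + string.whitespace)
        word = word.lower()
        h[word] = h.get(word, 0) + 1

h = dict()
process_line('Emma Woodhouse, handsome, clever, and rich', h)
# h now maps each cleaned word to its count, e.g. h['emma'] == 1
```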
To count the total number of words in the file, we can add up the frequencies in the histogram:

def total_words(h):
    return sum(h.values())

The number of different words is just the number of items in the dictionary:

def different_words(h):
    return len(h)

Here is some code to print the results:

print 'Total number of words:', total_words(hist)
print 'Number of different words:', different_words(hist)

And the results:

Total number of words: 161073
Number of different words: 7212

13.4  Most common words

To find the most common words, we can apply the DSU pattern:
def most_common(h):
    t = []
    for key, value in h.items():
        t.append((value, key))
    t.sort(reverse=True)
    return t

Here is a loop that prints the ten most common words:

t = most_common(hist)
print 'The most common words are:'
for freq, word in t[0:10]:
    print word, '\t', freq

And here are the results from Emma:

The most common words are:
to      5242
the     5204
and     4897
of      4293
i       3191
a       3130
it      2529
her     2483
was     2400
she     2364

13.5  Optional parameters

We have seen built-in functions and methods that take a variable number of arguments. It is possible to write user-defined functions with optional arguments, too. For example, here is a function that prints the most common words in a histogram:

def print_most_common(hist, num=10):
    t = most_common(hist)
    print 'The most common words are:'
    for freq, word in t[0:num]:
        print word, '\t', freq

The first parameter is required; the second is optional. The default value of num is 10.

If you only provide one argument:

print_most_common(hist)

num gets the default value. If you provide two arguments:

print_most_common(hist, 20)

num gets the value of the argument instead. In other words, the optional argument overrides the default value.

If a function has both required and optional parameters, all the required parameters have to come first, followed by the optional ones.

13.6  Dictionary subtraction

Finding the words from the book that are not in the word list from words.txt is a problem you might recognize as set subtraction; that is, we want to find all the words from one set (the words in the book) that are not in another set (the words in the list).

subtract takes dictionaries d1 and d2 and returns a new dictionary that contains all the keys from d1 that are not in d2. Since we don't really care about the values, we set them all to None.

def subtract(d1, d2):
    res = dict()
    for key in d1:
        if key not in d2:
            res[key] = None
    return res

To find the words in the book that are not in words.txt,
we can use:

words = process_file('words.txt')
diff = subtract(hist, words)

print "The words in the book that aren't in the word list are:"
for word in diff.keys():
    print word,

Here are some of the results from Emma:

The words in the book that aren't in the word list are:
rencontre jane's blanche woodhouses disingenuousness friend's venice apartment ...

Some of these words are names and possessives. Others, like "rencontre," are no longer in common use. But a few are common words that should really be in the list!

Exercise 6

Python provides a data structure called set that provides many common set operations. Read the documentation at docs.python.org/lib/types-set.html and write a program that uses set subtraction to find words in the book that are not in the word list.

13.7  Random words

To choose a random word from the histogram, the simplest algorithm is to build a list with multiple copies of each word, according to the observed frequency, and then choose from the list:

def random_word(h):
    t = []
    for word, freq in h.items():
        t.extend([word] * freq)
    return random.choice(t)

The expression [word] * freq creates a list with freq copies of the string word. The extend method is similar to append except that the argument is a sequence.
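On a toy histogram, random_word always returns one of the keys, with probability proportional to frequency; a seeded sketch:

```python
import random

def random_word(h):
    t = []
    for word, freq in h.items():
        t.extend([word] * freq)   # freq copies of each word
    return random.choice(t)

random.seed(2)
h = {'half': 1, 'a': 2, 'bee': 3}
words = [random_word(h) for i in range(100)]
# 'bee' occupies 3 of the 6 slots, so it should come up about half the time
```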
This algorithm works, but it is not very efficient; each time you choose a random word, it rebuilds the list, which is as big as the original book. An obvious improvement is to build the list once and then make multiple selections, but the list is still big.

An alternative is:

1. Use keys to get a list of the words in the book.
2. Build a list that contains the cumulative sum of the word frequencies. The last item in this list is the total number of words in the book, n.
3. Choose a random number from 1 to n. Use a bisection search to find the index where the random number would be inserted in the cumulative sum.
4. Use the index to find the corresponding word in the word list.
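That approach might be sketched like this, using the standard bisect module (the name random_word_fast is mine; in practice you would build the word and cumulative lists once and reuse them across calls):

```python
import random
import bisect

def random_word_fast(h):
    # Build the word list and the cumulative frequency sums.
    words = []
    cumulative = []
    total = 0
    for word, freq in h.items():
        words.append(word)
        total = total + freq
        cumulative.append(total)
    # Pick a position from 1 to n; bisection finds which word owns it.
    x = random.randint(1, total)
    i = bisect.bisect_left(cumulative, x)
    return words[i]
```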
Write a program that uses this algorithm to choose a random word from the book.

13.8  Markov analysis

If you choose words from the book at random, you can get a sense of the vocabulary, but you probably won't get a sentence:

this the small regard harriet which knightley's it most things

A series of random words seldom makes sense because there is no relationship between successive words. For example, in a real sentence you would expect an article like "the" to be followed by an adjective or a noun, and probably not a verb or adverb.

One way to measure these kinds of relationships is Markov analysis, which characterizes, for a given sequence of words, the probability of the word that comes next. For example, the song Eric, the Half a Bee begins:

Half a bee, philosophically,
Must, ipso facto, half not be.
But half the bee has got to be
Vis a vis, its entity. D'you see?

But can a bee be said to be
Or not to be an entire bee
When half the bee is not a bee
Due to some ancient injury?

In this text, the phrase "half the" is always followed by the word "bee," but the phrase "the bee" might be followed by either "has" or "is".

The result of Markov analysis is a mapping from each prefix (like "half the" and "the bee") to all possible suffixes (like "has" and "is").

Given this mapping, you can generate a random text by starting with any prefix and choosing at random from the possible suffixes. Next, you can combine the end of the prefix and the new suffix to form the next prefix, and repeat.

For example, if you start with the prefix "Half a," then the next word has to be "bee," because the prefix only appears once in the text. The next prefix is "a bee," so the next suffix might be "philosophically," "be" or "due."

In this example the length of the prefix is always two, but you can do Markov analysis with any prefix length. The length of the prefix is called the "order" of the analysis.

Exercise 8
Markov analysis:

1. Write a program that reads a text from a file and performs Markov analysis. The result should be a dictionary that maps from prefixes to a collection of possible suffixes. The collection might be a list, tuple, or dictionary; it is up to you to make an appropriate choice. You can test your program with prefix length two, but you should write the program in a way that makes it easy to try other lengths.

2. Add a function to the previous program that generates random text based on the Markov analysis.

3. Once your program is working, you might want to try a mash-up: if you analyze text from two or more books, the random text you generate will blend the vocabulary and phrases of the sources in interesting ways.
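One way to start on the analysis and generation steps, as a sketch with order-2 tuple prefixes (the names markov_map and generate are mine, and list-of-suffixes is only one of the representations the exercise leaves open):

```python
import random

def markov_map(words, order=2):
    # Map each prefix tuple to the list of words that follow it.
    mapping = {}
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        suffix = words[i + order]
        mapping.setdefault(prefix, []).append(suffix)
    return mapping

def generate(mapping, prefix, n):
    # Start from a prefix and append up to n randomly chosen suffixes.
    result = list(prefix)
    for i in range(n):
        suffixes = mapping.get(tuple(result[-len(prefix):]))
        if suffixes is None:
            break  # this prefix never occurred in the source text
        result.append(random.choice(suffixes))
    return ' '.join(result)
```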
13.9  Data structures

Using Markov analysis to generate random text is fun, but there is also a point to this exercise: data structure selection. In your solution to the previous exercises, you had to choose:

- How to represent the prefixes.
- How to represent the collection of possible suffixes.
- How to represent the mapping from each prefix to the collection of possible suffixes.
The last one is easy: the only mapping type we have seen is a dictionary, so it is the natural choice.

For the prefixes, the most obvious options are string, list of strings, or tuple of strings. For the suffixes, one option is a list; another is a histogram (dictionary).

How should you choose? The first step is to think about the operations you will need to implement for each data structure. For the prefixes, we need to be able to remove words from the beginning and add to the end. For example, if the current prefix is "Half a," and the next word is "bee," you need to be able to form the next prefix, "a bee."

Your first choice might be a list, since it is easy to add and remove elements, but we also need to be able to use the prefixes as keys in a dictionary, so that rules out lists. With tuples, you can't append or remove, but you can use the addition operator to form a new tuple:

def shift(prefix, word):
    return prefix[1:] + (word,)

shift takes a tuple of words, prefix, and a string, word, and forms a new tuple that has all the words in prefix except the first, and word added to the end.

For the collection of suffixes, the operations we need to perform include adding a new suffix (or increasing the frequency of an existing one), and choosing a random suffix. Adding a new suffix is equally easy for the list implementation or the histogram. Choosing a random element from a list is easy; choosing from a histogram is harder to do efficiently (see Exercise 7).

So far we have been talking mostly about ease of implementation, but there are other factors to consider in choosing data structures. One is run time. Sometimes there is a theoretical reason to expect one data structure to be faster than another; for example, I mentioned that the in operator is faster for dictionaries than for lists, at least when the number of elements is large. But often you don't know ahead of time which implementation will be faster.
One option is to implement both of them and see which is better. This approach is called benchmarking. A practical alternative is to choose the data structure that is easiest to implement, and then see if it is fast enough for the intended application. If so, there is no need to go on. If not, there are tools, like the profile module, that can identify the places in a program that take the most time.

The other factor to consider is storage space. For example, using a histogram for the collection of suffixes might take less space because you only have to store each word once, no matter how many times it appears in the text. In some cases, saving space can also make your program run faster, and in the extreme, your program might not run at all if you run out of memory. But for many applications, space is a secondary consideration after run time.

One final thought: in this discussion, I have implied that we should use one data structure for both analysis and generation. But since these are separate phases, it would also be possible to use one structure for analysis and then convert to another structure for generation. This would be a net win if the time saved during generation exceeded the time spent in conversion.

13.10  Debugging

When you are debugging a program, and especially if you are working on a hard bug, there are four things to try:

reading: Examine your code, read it back to yourself, and check that it says what you meant to say.

running: Experiment by making changes and running different versions. Often if you display the right thing at the right place in the program, the problem becomes obvious, but sometimes you have to spend some time to build scaffolding.

ruminating: Take some time to think! What kind of error is it: syntax, runtime, semantic? What information can you get from the error messages, or from the output of the program? What kind of error could cause the problem you're seeing? What did you change last, before the problem appeared?

retreating: At some point, the best thing to do is back off, undoing recent changes, until you get back to a program that works and that you understand. Then you can start rebuilding.
Beginning programmers sometimes get stuck on one of these activities and forget the others. Each activity comes with its own failure mode.

For example, reading your code might help if the problem is a typographical error, but not if the problem is a conceptual misunderstanding. If you don't understand what your program does, you can read it 100 times and never see the error, because the error is in your head.

Running experiments can help, especially if you run small, simple tests. But if you run experiments without thinking or reading your code, you might fall into a pattern I call "random walk programming," which is the process of making random changes until the program does the right thing. Needless to say, random walk programming can take a long time. You have to take time to think.

Debugging is like an experimental science. You should have at least one hypothesis about what the problem is. If there are two or more possibilities, try to think of a test that would eliminate one of them.

Taking a break helps with the thinking. So does talking. If you explain the problem to someone else (or even yourself), you will sometimes find the answer before you finish asking the question.

But even the best debugging techniques will fail if there are too many errors, or if the code you are trying to fix is too big and complicated. Sometimes the best option is to retreat, simplifying the program until you get to something that works and that you understand.

Beginning programmers are often reluctant to retreat because they can't stand to delete a line of code (even if it's wrong). If it makes you feel better, copy your program into another file before you start stripping it down. Then you can paste the pieces back in a little bit at a time.

Finding a hard bug requires reading, running, ruminating, and sometimes retreating. If you get stuck on one of these activities, try the others.

13.11  Glossary

deterministic: Pertaining to a program that does the same thing each time it runs, given the same inputs.

pseudorandom: Pertaining to a sequence of numbers that appear to be random, but are generated by a deterministic program.

default value: The value given to an optional parameter if no argument is provided.

override: To replace a default value with an argument.

benchmarking: The process of choosing between data structures by implementing alternatives and testing them on a sample of the possible inputs.
13.12  Exercises

Exercise 9

The "rank" of a word is its position in a list of words sorted by frequency: the most common word has rank 1, the second most common has rank 2, etc.

Zipf's law describes a relationship between the ranks and frequencies of words in natural languages. Specifically, it predicts that the frequency, f, of the word with rank r is:

f = c r^(−s)
where s and c are parameters that depend on the language and the text. If you take the logarithm of both sides of this equation, you get:

log f = log c − s log r
So if you plot log f versus log r, you should get a straight line with slope −s and intercept log c.

Write a program that reads a text from a file, counts word frequencies, and prints one line for each word, in descending order of frequency, with log f and log r. Use the graphing program of your choice to plot the results and check whether they form a straight line. Can you estimate the value of s?
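A sketch of the rank/frequency computation (the name rank_freq is mine; plotting is left to whatever tool you prefer):

```python
import math

def rank_freq(h):
    # Pair each frequency with its rank, most common first,
    # and return (log r, log f) tuples ready for plotting.
    freqs = sorted(h.values(), reverse=True)
    pairs = []
    for rank, freq in enumerate(freqs, 1):
        pairs.append((math.log(rank), math.log(freq)))
    return pairs

pairs = rank_freq({'the': 8, 'of': 4, 'a': 2, 'bee': 1})
# pairs[0] belongs to the top-ranked word: (log 1, log 8) == (0.0, log 8)
```

If Zipf's law holds, fitting a straight line to these points gives slope −s.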