Wondrous numbers and other diversions. W. Lloyd Milligan.
Tortoise: Let me. . . show you a property which is very easy to define, and yet for which no terminating test is known. I'm not saying there won't ever be one discovered, mind you--just that none is known. You begin with a number--would you care to pick one?
Just a few pages before the end of Douglas Hofstadter's marvelous book, Gödel, Escher, Bach: An Eternal Golden Braid, the Tortoise tells Achilles about "wondrous" numbers.
Begin with any whole number. If it is odd, multiply it by 3 and add 1. If it is even, divide it by 2. Continue in the same way with the resulting number. If applying this process repeatedly eventually brings you to 1, then the number you started with is a wondrous number.
The curious thing about the wondrous property is that you can never be sure (by applying this algorithm) that a number is not wondrous.
The Tortoise leaves Achilles a puzzle:
Tortoise: Why don't you try starting with 27? Mind you, I don't promise anything. But sometime, just try it, for your amusement. And I'd advise you to bring along a rather large sheet of paper.
Perhaps one sort of "rather large sheet of paper" is the personal computer. And who having such a sheet of paper in hand could resist the Tortoise's challenge?
You first need a number, so
10 INPUT N
If it is odd, multiply it by 3 and add 1.
20 IF N/2 <> INT(N/2) THEN N = N*3 + 1
If it is even, divide it by 2.
30 IF N/2 = INT(N/2) THEN N = N/2
If you have reached 1, then stop and declare the number to be wondrous.
40 IF N = 1 THEN STOP
If not, repeat the entire process.
50 GOTO 20
This five-step program works as intended; however, the reader is cautioned that on some trips through the loop the value of N changes twice!
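For example, if you enter 7, a single pass through lines 20 and 30 takes N from 7 to 22 and then to 11, since 22 is even.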
It would be interesting to count how many iterations of the wondrous process are required to make a number wondrous, and, in particular, to make 27 wondrous. To this end we sharpen and decorate our program a bit (see Listing 1). You will want to verify this for yourself, but when I asked my program whether 27 was wondrous, it told me that 27 is a won-won-wondrous number.
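Listing 1 is not reproduced with this text, but a minimal sketch of such a counting program, assuming we simply keep a counter C of how many times the rule has been applied, might run as follows (INPUT and PRINT details vary by BASIC dialect):

10 INPUT N
20 C = 0 : REM C will count applications of the rule
30 IF N/2 <> INT(N/2) THEN N = N*3 + 1 : GOTO 50
40 N = N/2
50 C = C + 1
60 PRINT C; N : REM show each step of the trajectory
70 IF N <> 1 THEN GOTO 30
80 PRINT "WONDROUS AFTER"; C; "STEPS"

Here line 30 handles the odd case and skips line 40, so exactly one application of the rule is counted on each pass. Counting each tripling and each halving as one step, 27 reaches 1 after 111 steps, which presumably accounts for the "won-won-wondrous" report.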
Primes and Polynomials

Martin Gardner offers the following problem. Starting with the prime number 41, add 2 to get another prime, 43. Then add 4 to get 47. 47 + 6 is 53, and 53 + 8 is 61. So far we have obtained nothing but primes. Does this procedure always yield primes?
To answer this question, we procure a rather large sheet of paper and proceed to write a small program (see Listing 2). We may be satisfied on the first run to examine only 10,000 terms of the sequence; otherwise, we should be prepared to wait a while.
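Listing 2 is not reproduced here either; the idea, at any rate, is simply to generate successive terms by adding 2, 4, 6, and so on, and to test each term for a divisor. A sketch of my own (with an arbitrary 10,000-term limit) might run:

10 REM sketch in the spirit of Listing 2 (not the article's actual listing)
20 T = 41
30 FOR K = 1 TO 10000
40 T = T + 2*K : REM add 2, then 4, then 6, and so on
50 FOR D = 2 TO SQR(T) : REM trial division
60 IF T/D = INT(T/D) THEN PRINT "TERM"; K + 1; "IS COMPOSITE:"; T; "="; D; "*"; T/D : END
70 NEXT D
80 NEXT K
90 PRINT "NO COMPOSITE IN THE FIRST 10000 TERMS"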
As it happens, we won't have to wait long, because the forty-first term in the sequence turns out to be composite. But wasn't 41 the number we started with? Is this some kind of coincidence?
Actually not. Let's look at another way of characterizing the sequence. After k additions the term reached is

(1)  k^2 + k + 41

(the starting value 41 corresponds to k = 0). This is easy to see if you recall that the sum of the first k positive integers is k(k + 1)/2, so the sum of the first k even numbers 2 + 4 + . . . + 2k is twice that, namely k(k + 1), and k additions bring us from 41 to 41 + k(k + 1). With k = 40, which yields the forty-first term, expression (1) gives 40^2 + 40 + 41 = 1681 = 41^2, the composite our program found; with k = 41 it factors as 41(41 + 1 + 1) = 41 · 43.
Indeed, for any prime p,

(2)  k^2 + k + p

is divisible by p whenever k = p (and also whenever k = p - 1, since then k + 1 = p). This result is trivial, but it points in the direction of a less trivial one. You might imagine that some sufficiently complex polynomial

(3)  a_0 + a_1 k + a_2 k^2 + . . . + a_n k^n

is a formula for prime numbers. That this cannot be so was first shown by the Swiss mathematician Leonhard Euler (1707-1783). See note [1].
The sufficiently assiduous reader (Hofstadter's term) has probably noticed that the key to the method of Listing 2 is that k^2 + k + c differs from (k-1)^2 + (k-1) + c by exactly 2k. There is a more general method of evaluating polynomials which is of interest. Expression (3) can be rewritten as follows:

(4)  k( . . . k(k(a_n k + a_(n-1)) + a_(n-2)) + . . . + a_1) + a_0

I first learned of this method of evaluating polynomials from the Hewlett-Packard HP-55 mathematics program handbook. This numerical method works especially well with H-P's stack-oriented "reverse Polish notation" pocket calculators, but is useful in a variety of applications.
A fairly easy programming exercise is to write a routine for evaluating a polynomial, given an ordered (n + 1)-tuple of coefficients and some fixed value x as input. Such a routine can be used, for example, to compute the sum of the first n kth powers using commonly available formulas.
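One possible solution, sketched below, is my own and is not one of the article's listings; it assumes the coefficients are stored in an array A(0) through A(ND), where ND is the degree, and as an example it evaluates the polynomial of expression (1).

10 REM polynomial evaluation by the nested form (4)
20 ND = 2 : REM degree of the example polynomial
30 DIM A(ND)
40 A(0) = 41 : A(1) = 1 : A(2) = 1 : REM coefficients of expression (1), k^2 + k + 41
50 INPUT "X"; X
60 GOSUB 100
70 PRINT "P(X) ="; P
80 END
100 REM evaluate P = ( ... (A(ND)*X + A(ND-1))*X + ... )*X + A(0)
110 P = A(ND)
120 FOR J = ND - 1 TO 0 STEP -1
130 P = P*X + A(J)
140 NEXT J
150 RETURN

Entering 41 for X prints 1763, the 41 · 43 obtained when expression (1) was factored above; to compute a sum of kth powers one simply loads A() with the coefficients of the appropriate formula.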
Counting Things

The first mathematical thing that most of us learn to do is count. This is as far as many people ever get. For this reason, mathematicians have set aside a special branch of their field called "combinatorial analysis." Ironically, some of the most difficult mathematical problems involve counting; it is from the counting difficulty, for example, that many finite probability problems inherit their treachery.
Helpful formulas to assist with counting things abound. For example, the number of distinct subsets of a finite collection of n objects is 2^n. To prove this, imagine that each object is labelled 1, 2, . . . , n. We associate with each subset a binary number using the following rule of correspondence: if the object labelled k is a member of the subset, then the kth digit of the binary number is 1; otherwise it is 0. Clearly, there are exactly 2^n such distinct binary numbers, and therefore 2^n subsets.
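The rule of correspondence is easy to watch in action. The following short program, my own illustration, lists every subset of a collection of three objects by running through the binary numbers 0 through 2^3 - 1 and printing the labels whose digits are 1:

10 N = 3 : REM a small collection, objects labelled 1 to N
20 FOR B = 0 TO 2^N - 1 : REM each binary number B stands for one subset
30 PRINT B; ":";
40 M = B
50 FOR K = 1 TO N
60 IF M - 2*INT(M/2) = 1 THEN PRINT K; : REM kth digit is 1, so object k is in the subset
70 M = INT(M/2)
80 NEXT K
90 PRINT
100 NEXT B

Eight lines appear, one for each of the 2^3 = 8 subsets; the line for 0 is the empty subset.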
The number of subsets of size k in a collection of n objects (k <= n) is

(5)  n! / (k! (n - k)!)

Expression (5) is called the combinations formula because it stands for the number of combinations of n things taken k at a time. It is also the coefficient of a^k b^(n-k) in the expansion of the binomial (a + b)^n. This binomial coefficient is usually written with n stacked over k in parentheses; here it will be abbreviated C(n, k). Since the total number of subsets of a set of size n must equal the sum of the number of subsets of size k for each k = 0, 1, . . . , n, we have

(6)  C(n, 0) + C(n, 1) + . . . + C(n, n) = 2^n
Can you think of a simple direct proof of expression (6)? (See note [2].)
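For a numerical check of expression (6), a few lines of BASIC will do. Here is an illustration of my own, with C(n, k) computed as a running product equivalent to formula (5):

10 N = 8
20 S = 0
30 FOR K = 0 TO N
40 GOSUB 100 : REM compute C = C(N, K)
50 PRINT K; C
60 S = S + C
70 NEXT K
80 PRINT "SUM OF COEFFICIENTS ="; S; "  2^N ="; 2^N
90 END
100 REM C(N, K) built up as a product, equivalent to formula (5)
110 C = 1
120 IF K = 0 THEN RETURN
130 FOR J = 1 TO K
140 C = C*(N - K + J)/J
150 NEXT J
160 RETURN

For n = 8 the coefficients 1, 8, 28, 56, 70, 56, 28, 8, 1 are printed, and their sum, 256, equals 2^8.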
Unfortunately, not every counting problem is easily reduced to a simple formula as in the preceding examples. For some problems, however, it is possible to exhibit counting algorithms. Several such Fortran programming examples, some quite technical, are given in the book Combinatorial Algorithms by Albert Nijenhuis and Herbert Wilf.

Counting Ties at Pool
The game of pool is usually played with 15 balls numbered 1 to 15. For present purposes, assume that there are two players.
One variant, called "rotation," is scored by adding the values of all the balls pocketed by each player. To win this game a player must score 61 or more points. Sixty points to each player constitutes a tie-game. In general, ties are possible if and only if the total number of balls is 4k or 4k - 1, where k = 1, 2, 3, . . . (since the total of all the ball values, n(n + 1)/2, must be even).
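For example, with seven balls the total is 28, so each player needs 14 points to tie (7 = 4k - 1 with k = 2); with five or six balls the totals are 15 and 21, both odd, so no tie can occur.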
It is apparent that the number of distinct ways in which rotation can be tied is even. For any particular tie-distribution of balls, exchanging the identity of the players yields a complementary tie-game.
Another way of expressing this symmetry principle is to say that for any particular ball, the number of tie-distributions in which the ball belongs to one player is equal to the number of ties in which the ball was scored by the other player.
When I first thought of the problem of counting ties at pool (rotation), I hoped that the symmetry concept would generalize very quickly to a formula for ties whose argument is the number of balls. If such a formula exists, however, I have not been able to find it.
To get an idea of how to design an algorithm for counting ties, it is helpful to consider how you would enumerate tie-distributions exhaustively using pencil and paper. The task is to be sufficiently systematic as to ensure against repetition and, at the same time, to permit no accidental omissions. With the 15-ball pool game this is surprisingly difficult.
However, with a smaller number the task is accomplished more easily. Table 1 enumerates all tie-distributions which include the 8-ball in a hypothetical game played with eight balls. (Let's call this the order-8 game.) A tie-score is obtained in the order-8 game when each player scores exactly 18 points.
Seven distributions are shown in the table. Taken together with the seven complementary distributions, there are 14 ways to tie in all.
There is a pattern in Table 1, but it is somewhat difficult to describe. Moreover, care is essential if the description is to form the basis of a general algorithm.
Listing 3 presents a program for counting ties in the order-n game. The actual counting part of this program is contained in the 12 lines numbered 70 through 180. The algorithm is based on the pattern in Table 1. Before studying Listing 3 you may wish to design your own counting program.
The value of a tie score in the order-n game is computed at line 30, while line 40 tests to see that a tie game is indeed possible.
It is evident that in an order-n game, neither player can pocket more than n balls. So at line 50 we dimension an array to contain the values of the balls pocketed in tie tries that include the n-ball (see line 70). There is one trivial case (n = 3) in which the value of a tie score is equal to n. This case is disposed of at line 60.
As was shown in the preceding section, there are 2^n subsets of a set of n elements. Thus there are 2^n different games of order-n rotation in total, neglecting the order in which the balls are pocketed by the players. For the usual order-15 game this is the familiar number 32,768.
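As a check on any counting program, the binary-number correspondence from the preceding section gives a brute-force count directly: run through the 2^(n-1) subsets that contain the n-ball (by the symmetry principle above, these account for exactly half of all ties) and add up the ball values whose binary digits are 1. The following sketch is my own and is not the article's Listing 3:

10 REM brute-force tie counter (a check, not the article's Listing 3)
20 INPUT "NUMBER OF BALLS"; N
30 T = N*(N + 1)/4 : REM the tie score
40 IF T <> INT(T) THEN PRINT "NO TIE IS POSSIBLE" : END
50 C = 0
60 FOR B = 0 TO 2^(N - 1) - 1 : REM B encodes which of balls 1 to N-1 accompany the N-ball
70 S = N : M = B
80 FOR K = 1 TO N - 1
90 IF M - 2*INT(M/2) = 1 THEN S = S + K
100 M = INT(M/2)
110 NEXT K
120 IF S = T THEN C = C + 1 : GOSUB 200
130 NEXT B
140 PRINT "TIES CONTAINING THE"; N; "BALL:"; C
150 PRINT "TOTAL TIES:"; 2*C
160 END
200 REM print one tie distribution: the N-ball plus the balls encoded in B
210 PRINT N;
220 M = B
230 FOR J = 1 TO N - 1
240 IF M - 2*INT(M/2) = 1 THEN PRINT J;
250 M = INT(M/2)
260 NEXT J
270 PRINT
280 RETURN

For the order-8 game it prints the seven distributions of Table 1 and reports 14 ties in all; for the order-15 game it has 2^14 = 16,384 subsets to examine.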
As in Table 1, the program uses the principle that the number of ties with the n-ball equals the number of ties without it. Line 70 pockets the n-ball.
The counting loop structure is somewhat complicated because the index variable is altered within the loop as well as by the NEXT statements. The value of I denotes the number of balls pocketed in the current try for a tie-distribution.
To understand how the counting algorithm works, it is helpful to trace the execution of the program for at least one row of the 8-ball problem. Unfortunately this analysis is too detailed to include here.
On first entering the FOR loop, the n-1 ball is pocketed. The value of each pocketed ball is set up experimentally at line 100. The loop (and program) terminating test is performed at line 110.
Whenever the last ball pocketed has an invalid value (<1), "unpocket" it and decrement the value of the previous ball (lines 120-130). When the last ball is valid, line 150 transfers control to a subroutine (line 250) which computes the score for each try.
If a tie is found (line 160), the subroutine at line 300 increments the tie counter and prints the distribution. Printing the distributions greatly enhances the interest of the program. Finally, line 170 decides whether to decrement the value of the current ball or to pocket another ball for the next try.
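Listing 3 itself is not reproduced with this text, so for comparison with the description above here is a backtracking sketch of my own; the line numbers, variable names, and details of the control flow are mine and will not match the listing line for line. B(1) through B(I) hold the balls pocketed so far, in decreasing order, S is their sum, T the tie score, and C the count of ties containing the N-ball.

100 REM backtracking tie counter: a sketch, not the article's Listing 3
110 INPUT "NUMBER OF BALLS"; N
120 T = N*(N + 1)/4
130 IF T <> INT(T) THEN PRINT "NO TIE IS POSSIBLE" : END
140 DIM B(N)
150 C = 0
160 B(1) = N : S = N : I = 1 : REM pocket the N-ball first
170 REM here B(1) > B(2) > ... > B(I) are the pocketed balls and S is their sum
180 IF S = T THEN GOSUB 400 : GOTO 250
190 IF S > T OR B(I) = 1 THEN GOTO 250
200 I = I + 1 : B(I) = B(I - 1) - 1 : S = S + B(I) : REM pocket another, smaller ball
210 GOTO 170
250 REM back up: decrement the last ball, unpocketing it if it reaches zero
260 IF I = 1 THEN GOTO 320
270 S = S - B(I) : B(I) = B(I) - 1
280 IF B(I) >= 1 THEN S = S + B(I) : GOTO 170
290 I = I - 1 : GOTO 260
320 PRINT "TIES CONTAINING THE"; N; "BALL:"; C
330 PRINT "TOTAL TIES:"; 2*C
340 END
400 REM record and print one tie distribution
410 C = C + 1
420 FOR J = 1 TO I : PRINT B(J); : NEXT J : PRINT
430 RETURN

For the order-8 game this prints the same seven distributions and the total 14; the test at line 190 of the sketch is what keeps the search short, since no extension of a set whose sum already reaches the tie score can produce another tie.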
Summary

This article has described three recreational programming problems. Recreational problems are helpful in developing analytical and programming skills. Their real appeal, though, is in their capacity for discovery. I have tried to share the sense of enjoyment one experiences on finding something unexpected or curious. Who knows whether the next problem may lead to a truly interesting finding?

Footnotes
[1] Obviously, not every polynomial P is divisible by its constant term. But if N is an integer such that P(N) = M, then it is easily shown that for any integer k, M divides P(N + kM). Since P cannot assume any given value more than n times, where n is the degree of P, there must be a value of P(N + kM) distinct from ±M for some choice of k.
[2] Since C(n, k) is the binomial coefficient, summing over k = 0 to n gives C(n, 0) + C(n, 1) + . . . + C(n, n) = (1 + 1)^n = 2^n.