Recursion is an idea, and it is a programming technique. Calculations and definitions in and out of mathematics both involve recursion, and so it appears in computing as a natural expression of such definitions and calculations.

As others have pointed out [LEDGARD, 1981], a natural and effective definition of the word *descendant* is:

DEFINITION: *A* descendant *of a person is*

* *an offspring, or*

* *the* descendant *of an offspring of the person.*

This definition is said to be recursive because it is in terms of *itself*. Two properties common to recursive definitions make it possible to check whether a candidate statement qualifies as a recursive definition:

an escape clause ("an offspring") that allows a search to terminate, and

a refresh clause ("the descendant . . . ") that continues the search at a new point if escape is not invoked.

If Lottie has a child Leonard, who has a child David, then the check that tests Leonard as a descendant of Lottie terminates at the first stage through the escape clause. The check of David as a descendant of Lottie must be refreshed once, continuing the search from Leonard.

Many natural-language words are defined by *indirect* recursion because they are defined in terms of each other. Some, like the word *descendant*, may be defined by direct recursion, in terms of themselves. Dictionaries are usually not so courageous.

Mathematical forms and functions can be recursive:

DEFINITION: N! = 1 for N = 0

= N × [(N - 1)!] for N > 0

DEFINITION: X^k = 1 for k = 0

= X × X^(k-1) for k > 0
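Both definitions translate almost line for line into a program; a Python sketch follows (the function names are mine, not the text's). Each function has an escape clause (the zero case) and a refresh clause (the recursive case):

```python
def factorial(n):
    # N! = 1 for N = 0, else N * (N - 1)!
    return 1 if n == 0 else n * factorial(n - 1)

def power(x, k):
    # X^k = 1 for k = 0, else X * X^(k-1)
    return 1 if k == 0 else x * power(x, k - 1)
```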

Both of these (and most similar examples) are closely tied to one of the pervading ideas of mathematical sciences -- *Mathematical Induction*, *MI* for short. MI is ubiquitous because it is one of the defining properties of the counting numbers 1,2,3 . . . . For the sake of completeness, here is a description of MI:

Suppose that S(n) is a statement that involves the positive integer n:

If

1. S(1) is true, and

2. S(k) is true whenever S(1), S(2), . . . , S(k - 1) are true,

then S(n) is true for all positive integers n.

The *statement* of MI leads naturally to iteration: prove *S*(1), then *S*(2) from it, then *S*(3) from *S*(1) and *S*(2) . . . .

The *proof* that *S*(*n*) is true for all *n* for some statement *S* has the flavor of *recursion*: to prove *S*(*k*), reduce it to statements about *S*(*k* - 1) and perhaps *S*(*k* - 2), . . . , *S*(1), and by repetition of this, chase the truth of the supporting statements back to *S*(1). The chain of reasoning is then reversed for the flow of results, building back up to *S*(*k*).

A full discussion of MI is outside the scope of this book, but some of it would resemble the discussion of recursion to follow.

The simplest applications of recursion in programming are statements that act as alternates to simple iterations. For example, to find the pointer to the bottom node of a nonempty stack, pointed to by the pointer *p*, either an iterative or a recursive routine will do:

function ChaseTail(p)                         {iterative
    q ← p
    while q .Link ≠ NIL do
        q ← q .Link
    endwhile
    return q
end {ChaseTail

function UnderDog(p)                          {recursive
    if p .Link = NIL
        then return p
        else return UnderDog(p .Link)
    endif
end {UnderDog

*UnderDog* is logically valid because it repeatedly calls itself at a new point until the escape clause is invoked, and the escape clause is bound to occur. *UnderDog* can be implemented because a compiler that allows recursion will translate it into an iterative procedure. Very briefly, for languages in which recursion is legal, the translator of *UnderDog* would build a stack of pointers *p*, *p* .*Link*, *p* .*Link* .*Link*, . . . and then return the last one when the nest of invocations is unraveled. *UnderDog* may be slower than *ChaseTail*, the iterated form of the operation, but both are *O*(*n*) for a stack of depth *n*.
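Either routine can be sketched in Python with a minimal node type. The class and the names `chase_tail`/`under_dog` are illustrative, not from the text; `None` plays the role of NIL:

```python
class Node:
    def __init__(self, data, link=None):
        self.data = data
        self.link = link          # the .Link field; None is NIL

def chase_tail(p):
    # iterative: walk the links until the last node
    q = p
    while q.link is not None:
        q = q.link
    return q

def under_dog(p):
    # recursive: escape when p.link is NIL, else refresh on p.link
    if p.link is None:
        return p
    return under_dog(p.link)
```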

Recursion can be applied at a more abstract level as it is in the English language to *define* data structures. For example:

DEFINITION: *A linear linked list is NIL, or it is*

*a node linked to a linear linked list.*

This definition describes a process by which a candidate may be checked for conformance, but it does not describe an algorithm: nothing forces the process of refinement to terminate after a finite number of steps.

Figure 7.1 shows how the process described by the definition may be depicted.

A more specific form of the linear list definition is:

DEFINITION: *A linear linked list*, LinearList, *is either* NIL *or a node*, HeadNode, *linked to a linear linked list*, TailList.

The natural operations on it are then HEAD(*LinearList*) = *HeadNode* and TAIL(*LinearList*) = *TailList*.

These can be implemented as (rather abstract) functions:

function Head(LinearList)
    if LinearList = NIL
        then return NIL
        else return HeadNode
    endif
end {Head

function Tail(LinearList)
    if LinearList = NIL
        then return NIL
        else return TailList
    endif
end {Tail

*Problems **P7.1-P7.7 are appropriate at this point.*

7.1.1 From Front to Rear (optional)

The structure STACK of Chapter 4 was defined nonrecursively in terms of operations, yet *UnderDog* is a recursive operation that may be carried out on it. In fact, finding the bottom of a stack, or the last node of an instance of LINLIST, is inherently an operation requiring a repeated search. (The search may be described iteratively or recursively, but it cannot be escaped.) Essentially the same operation appears *recursively *in one description of the interaction between FRONT and INSERT in the structure QUEUE, as:

FRONT(INSERT(Q, i)) = if Empty(Q)
                          then i
                          else FRONT(Q)
                      endif
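The property can be checked against any queue implementation. Here is a sketch in which a Python list plays the queue (front at index 0) and `front_of_insert` encodes the recursive description above; all of the names are mine:

```python
def empty(q):
    return len(q) == 0

def insert(q, i):
    return q + [i]            # insertion at the rear

def front(q):
    return q[0]               # the front of the queue

def front_of_insert(q, i):
    # the recursive description: FRONT(INSERT(Q, i))
    return i if empty(q) else front(q)
```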

It is difficult to describe the relationship succinctly any other way, and the interaction between FRONT and INSERT is independent of implementation.

The trace of this interaction for a particular example demonstrates how recursion can be unraveled. After INSERT(*Q*, 10) is applied in Figure 7.2, FRONT(INSERT(*Q*, 10)) is also FRONT(*Q*). FRONT(*Q*) is FRONT(*Q*′), where *Q*′ is the queue to which INSERT(*Q*′, -5) is applied to construct *Q*, as in Figure 7.3.

4 = FRONT(Q″) = FRONT(Q′) = FRONT(Q)

Formal definitions of the kind indicated by this section are the subject of current development and research in computer science, under the title of *abstract data structures.*

Any sequence may be described by a function of the positive integers. The ideal function of this kind describes a simple rule by which the *n*th value is to be calculated. For some sequences such as 1, 4, 9, 16, 25, . . . , a description as a function of the positive integers can be very direct: *f*(*n*) = *n*^2. For others, such as the Fibonacci sequence 1, 1, 2, 3, 5, 8, . . . , the natural description is recursive:

f(1) = 1    f(2) = 1

f(n) = f(n - 2) + f(n - 1) for n = 3, 4, 5, . . .

This function may be used as a description of how to *derive* the *n*th value. For example:

f(5) = f(3) + f(4)
     = [f(1) + f(2)] + [f(2) + f(3)]
     = [1 + 1] + [1 + [f(1) + f(2)]]
     = 2 + [1 + [1 + 1]]
     = 2 + [1 + 2]
     = 2 + 3
     = 5

This search for the *n*th value involves only *f* itself--and *f invokes* itself.

A recurrence such as 2h(n) = h(n) - 3n + 1 may yield to algebraic solution techniques, but the solution techniques for *f* that are relevant here are the algorithmic ones. For example:

function Fibo1(n)                             {O(2^n) TRANSPARENT STACK
    if n = 1 then return 1 endif
    if n = 2 then return 1 endif
    return Fibo1(n - 2) + Fibo1(n - 1)
end {Fibo1
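A direct Python transcription of *Fibo1* keeps the two escape clauses and the exponential behavior:

```python
def fibo1(n):
    # O(2^n): each call spawns two more until the escape clauses fire
    if n == 1:
        return 1
    if n == 2:
        return 1
    return fibo1(n - 2) + fibo1(n - 1)
```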

Bottom-up iteration starts with known or easily found values and builds up to the desired value. A bottom-up Fibonacci calculation is:

function UpFibo(n)                            {O(n) NO STACK
    Fibo ← 1
    OneBack ← 1
    for i = 3 to n do
        TwoBack ← OneBack
        OneBack ← Fibo
        Fibo ← TwoBack + OneBack
    next i
    return Fibo
end {UpFibo
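*UpFibo* carries over to Python with the same variable roles:

```python
def up_fibo(n):
    # O(n), no stack: build up from the two known starting values
    fibo = 1
    one_back = 1
    for i in range(3, n + 1):
        two_back = one_back
        one_back = fibo
        fibo = two_back + one_back
    return fibo
```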

A bottom-up Fibonacci calculation can also be done with the help of a stack in which the top values play the role of *TwoBack, OneBack,* and *Fibo.* For simplicity, assume that a particular stack is accessed by PUSH, POP, DELETE, and TOP without explicitly naming the stack. This *implicit* stack is similar to the stack of the FORTH machine of Part II, section F. The procedure to be developed has the flavor of the FORTH language itself, in which the standard set of stack operations is augmented with others (that can be defined in terms of the standard set). Stack operations DUP (duplicate the top value), ROT (rotate the top three values so the third comes to the top), and SUM (sum the top two values) operate as shown in Figure 7.5.

A procedure for finding *f*(*n*) with these operations is:

function UpStack(n)                           {O(n) IMPLICIT STACK
    Push(1)
    DUP
    for i = 3 to n do
        DUP
        ROT
        SUM
    next i
    return Top
end {UpStack
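*UpStack* can be mimicked with a Python list as the implicit stack. DUP, ROT, and SUM are defined below in terms of list operations, assuming (as in Figure 7.5) that ROT brings the third value to the top and SUM replaces the top two values by their sum:

```python
stack = []

def push(v):
    stack.append(v)

def dup():
    stack.append(stack[-1])           # duplicate the top value

def rot():
    stack.append(stack.pop(-3))       # third value comes to the top

def sum_op():
    stack.append(stack.pop() + stack.pop())   # replace top two by their sum

def up_stack(n):
    stack.clear()
    push(1)
    dup()
    for i in range(3, n + 1):
        dup()
        rot()
        sum_op()
    return stack[-1]                  # Top
```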

In order to calculate *f*(5) in the top-down fashion it is first necessary to calculate *f*(3) and *f*(4) upon which it depends, neither of which can be directly evaluated. In order to calculate *f*(4), *f*(3) is needed again. In one case, *f*(3) is "requested" by *f*(4), and in the other by *f*(5). Evaluation records can be constructed as an integer *k*, a value *f* (set to zero if not yet known), and a link back to the requesting record, shown in Figure 7.6.

Figure 7.7 pictures the calculation of *f*(5)*.*

With the requests reduced to this form, the values of higher nodes can be determined from lower nodes, as shown in Figure 7.8, and finally reaching the calculation in Figure 7.9.

A direct expression of this scheme as a data structure must be deferred to Chapter 8; but, with the help of a stack, the evaluation records can be saved as they are generated and discarded as they are used. Using levels in the stack as links, *f*(*n*) can be calculated by repeating:

**1.** If the *f*-value of the top record is not zero and the level is 1, then return that value.

**2.** If the *f*-value of the top record is zero, then generate two new records (for *k* - 1 and *k* - 2) and stack them, linking them back to the record that led to their generation.

**3. **If the *f*-value is not zero for the top value but is zero for the next-to-top value, then generate two new records from the latter.

**4. **If the *f*-value is not zero for the top two values, then sum them and add the sum to the value to which they are linked. (It happens that they *will* be linked to the same source of their generation because records are generated in pairs.)

A trace of this process for *f*(5) is shown in Figure 7.10.

procedure NewRecord(kValue, Level)
    Top ← Top + 1
    if kValue < 3
        then f[Top] ← 1
        else f[Top] ← 0
    endif
    k[Top] ← kValue
    Link[Top] ← Level
end {NewRecord

A record at index *Level* can be split to produce two new records on top of the stack by:

procedure Split(Level)
    kValue ← k[Level]
    NewRecord(kValue - 1, Level)
    NewRecord(kValue - 2, Level)
end {Split

With these operations defined, a top-down iterative calculation of the *n*th Fibonacci number is:

function DownStack(n)                         {O(2^n) EXPLICIT STACK
    if n < 3 then return 1 endif
    Top ← 0
    NewRecord(n, 0)
    repeat
        if f[Top] = 0
            then Level ← Top                  {split top record
                 Split(Level)
            elseif f[Top - 1] = 0
                then Level ← Top - 1          {split next-to-top record
                     Split(Level)
                else Level ← Link[Top]        {sum top values
                     Sum ← f[Top] + f[Top - 1]
                     f[Level] ← Sum
                     Top ← Top - 2
            endif
        endif
    until Top = 1
    return f[1]
end {DownStack
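A Python rendering of *DownStack* replaces the parallel arrays *f*, *k*, and *Link* with lists, leaving index 0 unused so that records are 1-based as in the text. This sketches the text's algorithm, not any particular compiler's activation records:

```python
def down_stack(n):
    # O(2^n) with an explicit stack of evaluation records
    if n < 3:
        return 1
    f, k, link = [None], [None], [None]       # index 0 unused

    def new_record(k_value, level):
        f.append(1 if k_value < 3 else 0)
        k.append(k_value)
        link.append(level)

    def split(level):
        k_value = k[level]
        new_record(k_value - 1, level)
        new_record(k_value - 2, level)

    new_record(n, 0)
    while True:
        t = len(f) - 1                        # Top
        if f[t] == 0:
            split(t)                          # split the top record
        elif f[t - 1] == 0:
            split(t - 1)                      # split the next-to-top record
        else:
            level = link[t]                   # sum the top two values
            f[level] = f[t] + f[t - 1]
            del f[t - 1:]                     # pop both records
            del k[t - 1:]
            del link[t - 1:]
        if len(f) - 1 == 1:                   # until Top = 1
            break
    return f[1]
```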

A compiler that translates the recursive function subprogram *Fibo1* would generate *activation records* for each call--activation records that resemble the evaluation records of *DownStack*. Note that the "*n*" in the definition of *Fibo1* keeps changing with each invocation. If the records of *DownStack* are taken to be the actual activation records, the second stack snapshot depicts three values of *n*: *n* = 3, 4, and 5, stored in *k*[*Level*] for *Level* = 3, 2, and 1.

The availability of some language features, however, may complicate the activation records (and their generation). Even the relatively simple PL/0 compiler that generates p-code (section 4.4) must take into account the block structure of the language. Specifically, an identifier may be redefined within a subprogram, where use of the identifier has a limited scope. It is not the same variable as one named identically elsewhere. The linkage patterns within the activation stack must trace variable references back to the location of their values within the stack. That location is determined by the sequence of procedure calls during execution, in conjunction with the effects of identifier scope.

The complex problems associated with the implementation of recursive routines like *Fibo1* can be left to language design and compiler writing. What is not to be left to them is: How can recursion be used as a design tool? When *should* it be used in a program design?

There is no definite answer to these questions, but--as a rule of thumb--if recursion is a natural expression of a problem solution and execution-time is not crucial, then use it.

The search through an ordered list for an arbitrary element is a common operation in computing. If output is ordered, then a search through it is efficient--the justification for sorting in the first place. The output of a program ultimately is ordered for the convenience of a search by a human or another program that accepts it as input. Sorting is justified by searching, and searching is made cost-effective by sorting.

One not very efficient sequential search is LSD (for Linear Search--Dumb):

function LSD(Key, Max, Lost)                  {O(n)
    Here ← 0
    for k = 1 to Max do
        if Key[k] = Lost
            then Here ← k
        endif
    next k
    return Here
end {LSD

A slightly better sequential search plants *Lost* as a sentinel just past the end of the list, so that the loop needs no separate bounds test:

function LSE(Key, Max, Lost)                  {O(n)
    Key[Max + 1] ← Lost
    k ← 1
    while Key[k] ≠ Lost do
        k ← k + 1
    endwhile
    return k
end {LSE
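The sentinel idea carries over to Python directly; since Python lists are 0-based, the sentinel is appended at the end rather than stored at *Max* + 1 (the function name is mine):

```python
def lse(key, lost):
    # sentinel search: plant lost at the end so the loop needs no bounds test
    key.append(lost)
    k = 0
    while key[k] != lost:
        k += 1
    key.pop()                 # remove the sentinel
    return k                  # == original len(key) when lost was absent
```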

A natural and recursive way to look for *Lost* among the values in *Key*[1 *. . Max*] stored in nondecreasing order is:

function IBS(Key, Max, Lost)                  {O(ln n) ITERATIVE BINARY SEARCH
    Lower ← 1
    Upper ← Max
    while Lower ≤ Upper do
        Mid ← (Lower + Upper) DIV 2
        case
            Lost > Key[Mid] : Lower ← Mid + 1
            Lost = Key[Mid] : return Mid
            Lost < Key[Mid] : Upper ← Mid - 1
        endcase
    endwhile
    return 0
end {IBS
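*IBS* in Python over a sorted list; 0-based indexing replaces the 1 . . *Max* convention, so -1 rather than 0 signals failure:

```python
def ibs(key, lost):
    # iterative binary search; key is sorted nondecreasing
    lower, upper = 0, len(key) - 1
    while lower <= upper:
        mid = (lower + upper) // 2
        if lost > key[mid]:
            lower = mid + 1
        elif lost == key[mid]:
            return mid
        else:
            upper = mid - 1
    return -1
```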

It is worthwhile to trace this algorithm on a simple example.

*Exercise **E7.1 is appropriate at this point.*

The recursive version of binary search needs to have *Lower* and *Upper* passed to it, since it operates on a *segment* of an array, not on an entire array. This is a fairly common effect of the change from an iterative to a recursive routine. Some local variables become parameters because they must change with each invocation of the routine. One version, initially invoked as *RBS(Key,1,Max,Lost), *is:

function RBS(Key, Lower, Upper, Lost)         {O(ln n) RECURSIVE BINARY SEARCH
    if Lower ≤ Upper
        then Mid ← (Lower + Upper) DIV 2
             case
                 Lost > Key[Mid] : return RBS(Key, Mid + 1, Upper, Lost)
                 Lost = Key[Mid] : return Mid
                 Lost < Key[Mid] : return RBS(Key, Lower, Mid - 1, Lost)
             endcase
        else return 0
    endif
end {RBS
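*RBS* in Python, again 0-based with -1 for failure, so that any segment key[lower . . upper] can be searched:

```python
def rbs(key, lower, upper, lost):
    # recursive binary search on the segment key[lower..upper]
    if lower > upper:
        return -1
    mid = (lower + upper) // 2
    if lost > key[mid]:
        return rbs(key, mid + 1, upper, lost)
    if lost == key[mid]:
        return mid
    return rbs(key, lower, mid - 1, lost)
```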

In one sense, *RBS* is more restrictive than *IBS* because its initial call involves more parameters. In another sense, *RBS* is less restrictive because it can be used to search *any segment* of an array, whereas *IBS* as written searches an initial segment. *IBS* can easily be generalized to do the same, so the two provide the same service.

More run-time overhead can be expected for *RBS* than for *IBS* because of the stacking of activation records, but both are still *O*(ln *n*) algorithms. Moreover, for large *n*, *RBS* may use a great deal of space on the stack.

The *idea* of binary search is recursive, but the implementation does not have to be. The idea of dividing a data set into segments and dealing with each of them recursively has been generalized to an entire class of algorithms--the general technique is called *Divide-and-Conquer* (D-and-C). Among the D-and-C algorithms are sorting algorithms, one of which is the subject of section 7.5.

*Exercise E7.2 is appropriate at this point.*

A rich variety of stories have been formed about an apparently ancient puzzle, the Towers of Hanoi: *n* disks, no two the same size, are stacked in order of size on one of three pegs, and they must be moved one at a time to a target peg, with the help of a spare, so that no disk ever rests on a smaller one. A recursive solution is:

procedure Towers(n, Source, Target, Helper)
    if n = 0 then return endif
    Towers(n - 1, Source, Helper, Target)
    MoveOne(Source, Target)
    Towers(n - 1, Helper, Target, Source)
end {Towers
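*Towers* in Python; *MoveOne* is replaced here by recording each move in a list, an addition of mine so that the sequence of moves can be inspected:

```python
def towers(n, source, target, helper, moves):
    # move n disks from source to target, using helper as the spare
    if n == 0:
        return
    towers(n - 1, source, helper, target, moves)
    moves.append((source, target))        # MoveOne(Source, Target)
    towers(n - 1, helper, target, source, moves)
```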

*Exercise **E7.3, problems P7.1-P7.7, and program PG7.1 are appropriate at this point.*

A number of sorting techniques involve exchanging one value (or node) for another. An obvious source of inefficiency with exchanges is that elements of a file to be sorted are exchanged more than once during the sorting process. If it were possible to exchange an element directly into its proper position (ignoring the chaos in the rest of the file at that point), then *n* such operations would sort the file. It would be an *O*(*n*) process.

A position *k* in *Key*[1 . . *Max*] is properly placed if it satisfies:

Property P(k):
    Key[i] ≤ Key[k] for 1 ≤ i ≤ k
    Key[k] ≤ Key[j] for k ≤ j ≤ Max

Property *P*(*k*) for some *k* can be true with otherwise arbitrary order, and so provide a subgoal on the way to the sorting of *Key*. For example, Figure 7.11 has *P*(4), but not *P*(*k*) for other values of *k*, and this sequence is not sorted.

If *P*(*k*) is true for every *k*, 1 ≤ *k* ≤ *Max*, then *Key* is necessarily sorted.

A recursive sort called *QuickSort* can be developed by repeatedly establishing *P*(*k*) within the subrange *Key*[*i*] . . *Key*[*j*]. The details of *QuickSort* are to be found in Part II, Section O, where it is developed and implemented in Pascal.
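Although the details belong to Part II, the idea can be sketched in Python: partition to establish *P*(*k*) for some *k*, then recur on the two sides. The partition scheme below (last element as pivot) is my choice and differs in detail from the book's:

```python
def quick_sort(key, i=0, j=None):
    # sort key[i..j] in place by repeatedly establishing P(k)
    if j is None:
        j = len(key) - 1
    if i >= j:
        return key
    pivot = key[j]
    k = i
    for m in range(i, j):                 # partition around the pivot
        if key[m] <= pivot:
            key[k], key[m] = key[m], key[k]
            k += 1
    key[k], key[j] = key[j], key[k]       # pivot lands at position k: P(k) holds
    quick_sort(key, i, k - 1)
    quick_sort(key, k + 1, j)
    return key
```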

Simply because of aesthetics some problems have become part of the repertoire of nearly everyone who is acquainted with computer science beyond an elementary programming course. Either the problem itself or its solution is elegant. One such problem is:

Place eight chess queens on a chess board so that they do not threaten each other.

A chess queen can capture another piece by moving any number of squares along its row, its column, or a 45-degree diagonal. Eight queens is an upper bound because no two queens can occupy the same column (or row). A solution places the *n*th queen in column *n* of the board and determines the row, *Queen*[*n*], of the *n*th queen so that it controls a row and diagonals distinct from those on which the other queens lie. It is possible to do this iteratively [WIRTH, 1971]. The solution presented here is recursive.

Suppose that procedure *Remainder* is designed so that *Remainder*(*n*) places the queens from the *n*th one through the eighth one to solve the problem, if possible, and also sets *TruePath* to *TRUE* if it succeeds and to *FALSE* otherwise. In particular, if *TruePath* is set *TRUE* by *Remainder*(1), the problem is solved. A solution is:

program EightQueens
    SetUp
    Remainder(1)
    if TruePath
        then DisplayBoard
        else "No solution was found."
    endif
end {EightQueens

If the procedure call *Remainder*(*n*) is made with *n* > 8, then eight queens have been successfully placed, and it returns with *TruePath* = TRUE. When entered with *n* ≤ 8, *Remainder* makes a call *NextRow*(*i*,*j*) that searches for an available row beginning at row *i*. Initially *NextRow* is called with *i* = 0 and *j* = *n*; it looks for the available row of lowest index, then documents the placing of queen *n* on that row and column *n*. To check the ultimate suitability of that choice, *Remainder*(*n* + 1) is called, which of course makes *Remainder* recursive. If it fails (*TruePath* = FALSE), then the row chosen by the call *NextRow*(*i*,*n*) is canceled (with procedure *Cancel*) and a row of greater index is attempted:

procedure Remainder(n)
    TruePath ← FALSE
    if n > 8
        then TruePath ← TRUE
             return
    endif
    Row ← NextRow(0, n)
    while (Row ≤ 8) AND (NOT TruePath) do
        Remainder(n + 1)
        if NOT TruePath
            then Cancel(Row, n)
                 Row ← NextRow(Row, n)
        endif
    endwhile
end {Remainder

The function called *NextRow*(*i*,*j*) simply steps through the rows *i* + 1, *i* + 2, . . . and checks whether they are safe, meaning not controlled by any queen already placed, with a function *Safe*:

function NextRow(Row, Column)
    repeat
        Row ← Row + 1
    until Safe
    if Row ≤ 8 then Document endif
    return Row
end {NextRow

**Note**: Cancellation is a form of *backtracking*--a retraction of a partially developed solution in order to try another direction. Backtracking is a general approach to solving problems often applied to binary trees and to trees, discussed in Chapters 8 and 9.

At this point in the development of *EightQueens*, it is no longer possible to avoid the details of how to mark and detect the presence of a queen in the rows and diagonals controlled by it.

A queen in row *i*, column *j*, also controls two diagonals, one tilting 45 degrees to the right from vertical, and one tilting 45 degrees to the left. From the view of placing another queen, it does not matter *which* position in a row, column, or diagonal is occupied by another queen. Rows and columns are characterized by their index. The fifteen distinct diagonals that tilt to the right are characterized by positions for which *i* + *j* is constant and has one of the values 2, 3, . . . , 16. (A move of one step up and to the right on a right-tilted diagonal decreases the row by one and increases the column by one.) Similarly, the fifteen diagonals that tilt left are characterized by *i* - *j* (in the set -7, . . . , 7).

Figure 7.12 shows the row, column, and diagonals controlled by a queen at row 5, column 6. No information about the occupation of columns is needed since columns are possessed in sequence. Controlled rows and diagonals are marked in Boolean arrays *RowMark*, *TiltRight*, and *TiltLeft*, all initialized to *FALSE* to indicate that they are not controlled by any queen. Documentation of an occupation is performed by setting the appropriate cells to *TRUE*, cancellation by resetting them to *FALSE*, and checking for safety by ensuring that the appropriate row and diagonals are all *FALSE*:

function Safe
    if Row > 8
        then return TRUE
        else return NOT (RowMark[Row]
                         OR TiltRight[Column + Row]
                         OR TiltLeft[Column - Row])
    endif
end {Safe

procedure Document
    Queen[Column] ← Row
    RowMark[Row] ← TRUE
    TiltRight[Column + Row] ← TRUE
    TiltLeft[Column - Row] ← TRUE
end {Document

procedure Cancel(Row, Column)
    RowMark[Row] ← FALSE
    TiltRight[Column + Row] ← FALSE
    TiltLeft[Column - Row] ← FALSE
end {Cancel
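Assembled into Python, the whole solution might look like this sketch. The globals mirror the text's shared variables, indices run 1 to 8 as in the text, and *TruePath* becomes a return value; a Python list index of -7 . . -1 wraps around, which keeps the fifteen left-diagonal cells distinct:

```python
ROW_MARK = [False] * 9     # RowMark[1..8]
TILT_R   = [False] * 17    # TiltRight, indexed by column + row (2..16)
TILT_L   = [False] * 15    # TiltLeft, indexed by column - row (-7..7, wraps)
QUEEN    = [0] * 9         # Queen[1..8]: the row of the queen in each column

def safe(row, column):
    if row > 8:
        return True
    return not (ROW_MARK[row] or TILT_R[column + row] or TILT_L[column - row])

def document(row, column):
    QUEEN[column] = row
    ROW_MARK[row] = TILT_R[column + row] = TILT_L[column - row] = True

def cancel(row, column):
    ROW_MARK[row] = TILT_R[column + row] = TILT_L[column - row] = False

def next_row(row, column):
    # find the next safe row above 'row' in this column; document it if found
    row += 1
    while row <= 8 and not safe(row, column):
        row += 1
    if row <= 8:
        document(row, column)
    return row

def remainder(n):
    # place queens n..8; returns True (TruePath) on success
    if n > 8:
        return True
    row = next_row(0, n)
    while row <= 8:
        if remainder(n + 1):
            return True
        cancel(row, n)              # backtrack and try a higher row
        row = next_row(row, n)
    return False
```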

*Program **PG7.2 and project PJ7.1 are appropriate at this point.*

The records of a general list (graph) structure may contain any number of link-fields, and the links may form a tangle of great complexity. With just two link-fields a small number of nodes may form a pattern that is not any structure studied in the previous chapters--for example the pattern in Figure 7.13.

It is necessary to visit every node in such a structure to search within it, or copy it, or process the data field(s) in any way. Ways to traverse graphs based on the data structures of Chapters 8 and 9 are treated in Chapter 10. In this section, the structure is modeled by its own form.

A general record format that will serve for such a structure is:

aNode = RECORD
    Data : DataType
    Link : ARRAY[1 . . Max] OF aNode
END {aNode

A recursive process that visits every node of a structure and processes it is:

procedure Rvisit0(p)                          {Recursive Visit 0
    if p ≠ NIL
        then Process(p)
             for i = 1 to Max do
                 Rvisit0(p .Link[i])
             next i
    endif
end {Rvisit0

When* Rvisit0* is applied to this example with *p* initially pointing to node 1, it will process nodes 1, 2, 3, 5, 6, 4, 5, 6. The multiple processing of nodes (such as 5 and 6 in this example) can cause severe problems for a user as well as inefficiency. A more serious problem is that if a link-field of node 6 pointed to node 1, it would put *Rvisit0* into an infinite loop. One remedy for both problems is to introduce a *Mark* field set to *TRUE* for a node not yet processed and *FALSE* otherwise:

Node = RECORD
    Data : DataType
    Mark : BOOLEAN
    Link : ARRAY[1 . . Max] OF Node
END {Node

With this change, a recursive visit algorithm that processes each node only once is:

procedure Rvisit(p)                           {Recursive Visit
    if (p ≠ NIL) AND (p .Mark)
        then Process(p)
             p .Mark ← FALSE
             for i = 1 to Max do
                 Rvisit(p .Link[i])
             next i
    endif
end {Rvisit
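*Rvisit* in Python; the node class and the recording of the visit order are mine. *Mark* starts TRUE and is flipped FALSE as each node is processed, so shared nodes and cycles are handled:

```python
class GNode:
    def __init__(self, data):
        self.data = data
        self.mark = True        # TRUE: not yet processed
        self.link = []          # any number of link fields

def rvisit(p, process):
    # process each reachable node exactly once
    if p is not None and p.mark:
        process(p)
        p.mark = False
        for q in p.link:
            rvisit(q, process)
```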

The advantage of recursive algorithms such as *Rvisit* is that they tend to be much simpler and more easily developed than equivalent iterative algorithms. Nevertheless, they do have shortcomings.

One problem with *Rvisit* itself is that if the mark-fields are all initially *TRUE*, then after one execution of *Rvisit* they are all *FALSE*, and it cannot be reapplied. A separate (and similar) routine is required to complement the values of the *Mark* field, or *Rvisit* must be changed to allow *p* .*Mark* to be compared to a Boolean argument that can then be alternated between *TRUE* and *FALSE*.

A more severe problem with *Rvisit* and other recursive routines is that if a large number of nodes are involved, the run-time stack may become quite large, for it contains a copy of the information needed to continue the process for each currently unresolved link.

It is possible to derive an equivalent iterative algorithm from any recursive algorithm, but it is an awkward process and the result often benefits from rewriting. More importantly, the translation is essentially the building of *explicit* stacks to replace the *implicit* stacks that allow recursion to work.

An alternative to *Rvisit* that is not recursive or the explicit-stack translation of a recursive routine was discovered by Deutsch, Schorr, and Waite. A structured version, *Visit,* is presented in Part II, section M.

*Project PJ7.2 is appropriate at this point.*

A language used extensively in the field of artificial intelligence, and hence of increasing importance, is LISP. It is a working language becoming more widely available; a number of introductory books treat LISP programming. (One such is [TOURETSKY, 1983].) LISP combines a number of views about computation that give it a quality drastically different from Algol derivatives (Pascal-like languages). These include the central importance of applying functions and the use of predicates--statements with Boolean values. This section is only intended to point out how LISP is related to two themes of importance in this book: linked lists and recursion.

The central objects in a LISP program are lists, including functions that operate on them and the definitions of the functions. In the usual syntax, lists are enclosed in parentheses, and each pair of parentheses forms a list:

ITEM is not a list, but an *atom*.

(ITEM) is a list and has the effective structure shown in Figure 7.14

Similarly, (TWO ITEMS) has the structure shown in Figure 7.15.

The structure of ((TWO ITEMS)) is depicted in Figure 7.16.

And (((TWO ITEMS))) has the structure shown in Figure 7.17.

Storage efficiency is gained by saving atoms only once, as illustrated in Figure 7.18.

One list may be the atom of another, shown in Figure 7.19.

CAR returns the first item of a list (an atom or a list).

CDR returns the tail of a list--all but the first item.

CONS returns a list constructed from an item and a list.

LIST returns a list constructed from items.

The actions of these are illustrated in Figure 7.20.
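The four operations can be sketched in Python by modeling LISP lists as Python lists and atoms as strings. This is a modeling choice of mine (real LISP builds lists from cons cells, as in Figures 7.14-7.19), and LIST is renamed lisp_list to avoid Python's builtin:

```python
def car(lst):
    return lst[0]                 # first item of a list

def cdr(lst):
    return lst[1:]                # the tail: all but the first item

def cons(item, lst):
    return [item] + lst           # a new list with item at the front

def lisp_list(*items):
    return list(items)            # a list built from the given items
```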

*Project **PJ7.3 is appropriate at this point.*

Recursion is a fundamental way to express some relationships. When available, it can be a very convenient and delightfully succinct form for the expression of procedures.

The foundation on which language translators build recursion is a stack of activation records. Activation records are linked together on the "stack" in (possibly) interwoven lists that trace the value determinations of variables to their appropriate sources. (The p-Machine of section 4.4 and Part II, section G, is a relatively simple example of how this can be done.)

Recursion can be a marvelously easy form within which to structure an algorithm. It can also be wasteful of time and space, as illustrated by various ways to calculate the Fibonacci sequence.

The values of recursion are convenience, saving of programming time, and the development of an understanding of a problem. These may be traded for execution efficiency at the cost of the time and care required to develop an iterative solution.

Some examples in which recursion is generally deemed to be worth the runtime cost are: binary search, the Towers of Hanoi, *QuickSort*, and the Eight-Queens problem. (Other ways to traverse a general list are to be found in Chapter 10 and Part II, section M.)

*QuickSort *is frequently the sorting method of choice in applications because its average-case behavior is *O*(*n *ln *n*). *QuickSort *is an example of a recursive algorithm that tends to execute more efficiently than most iterative rivals.

The Eight-Queens solution is an example of *backtracking*, a programming technique of general utility.

LISP is a language that incorporates recursion and list processing into the mold of thought that it promotes in programmers.

Above all, recursion is a choice for problem solving that should be available to any programmer.

Exercises immediate applications of the text material

**E7.1 **Trace the procedure calls *IBS*(*Key*,6,8) and *IBS*(*Key*,6,5) when *Key *contains these values: *Key*[1] = 1, *Key*[2] = 3, *Key*[3] = 4, *Key*[4] = 6, *Key*[5] = 8, *Key*[6] = 10.

**E7.2 **Trace the calls of procedure *RBS*(*Key*, 1,6,8), *RBS*(*Key*, 1,6,5), and *RBS*(*Key*,3,6,5) with the values of *Key *given in E7.1.

**E7.3** Trace the stacks representing the pegs in procedure *Towers* of section 7.4 for *n* = 4. (PG7.1 asks for a program that does this.)

Problems not immediate, but requiring no major extensions of the text material

Write recursive versions of problems P7.1--P7.4:

**P7.1 **Search a singly linked list for a given node value.

**P7.2 **Search a circularly linked list for a given node value.

**P7.4 **Calculate the sum of the first *N* integers.

**P7.5 **(Difficult) Develop an iterative version of the Towers of Hanoi solution given in section 7.4.

**P7.6** Find the GCD (Greatest Common Divisor) of two values *N* and *M* as GCD(*N*,*M*), calculated as either of them if *M* = *N*, and otherwise as the GCD of the smaller of them and their difference. (Use a recursive procedure.)

**P7.7** Find the determinant of an *N* by *N* matrix. The determinant of *A* can be calculated as *A*[1,1] for *N* = 1 and, for *N* > 1, as the sum from *j* = 1 to *N* of (-1)^(1+*j*) × *A*[1,*j*] × (the determinant of the (*N* - 1) by (*N* - 1) matrix formed by deleting row 1 and column *j* of *A*).

Programs for trying it yourself

**PG7.1 **Implement procedure *Towers *of section 7.4 with a trace of the stacks. Run it for *n *= 4,5, and 6.

**PG7.2 **Implement the Eight-Queens solution of section 7.6 with a display routine that forms a board representation with queen markers in place.

Projects for fun and for serious students

**PJ7.1 **Write a program to find *all *solutions to the Eight-Queens puzzle.

**PJ7.2 **Implement *Rvisit *of section 7.7.

**PJ7.3** Write a command-shell program to invoke functions CAR, CDR, CONS, LIST, DISPLAY, INSERT, and DELETE. The last two commands allow insertion and deletion at *any* depth in a list. The program is to implement DISPLAY, INSERT, and DELETE internally with the *other* command functions.

Comment: You may wish to implement the one-cell-per-token rule as a separate project.