By using -2 as the base, both positive and negative integers can be expressed without an explicit sign or other irregularity such as having a negative weight for the most significant bit [Knu3]. The digits used are 0 and 1, as in base +2; that is, the value represented by a string of 1's and 0's is understood to be

(a_{n-1} a_{n-2} ... a_1 a_0) = a_{n-1}(-2)^{n-1} + a_{n-2}(-2)^{n-2} + ... + a_1(-2) + a_0.

From this, it can be seen that a procedure for finding the base -2, or "negabinary," representation of an integer is to successively divide the number by -2, recording the remainders. The division must be such that it always gives a remainder of 0 or 1 (the digits to be used); that is, it must be modulus division. As an example, the plan below shows how to find the base -2 representation of -3.

-3 / -2 =  2, remainder 1
 2 / -2 = -1, remainder 0
-1 / -2 =  1, remainder 1
 1 / -2 =  0, remainder 1

Because we have reached a 0 quotient, the process terminates (if continued, the remaining quotients and remainders would all be 0). Thus, reading the remainders upwards, we see that -3 is written 1101 in base -2.

Table 12-1 shows, on the left, how each bit pattern from 0000 to 1111 is interpreted in base -2, and on the right, how integers in the range -15 to +15 are represented.

It is not obvious that the 2^{n}
possible bit patterns in an n-bit word uniquely
represent all integers in a certain range, but this can be shown by induction.
The inductive hypothesis is that an n-bit word
represents all integers in the range

-(2/3)(2^{n} - 1) <= x <= (1/3)(2^{n} - 1), for n even, and (1a)

-(1/3)(2^{n} - 2) <= x <= (1/3)(2^{n+1} - 1), for n odd. (1b)

Assume first that n is even. For n = 2, the representable integers are 10, 11, 00, and 01 in base -2, or -2, -1, 0, and 1.

This agrees with (1a), and each integer in the range is represented once and only once.

A word of n + 1 bits can, with
a leading bit of 0, represent all the integers given by (1a). In addition, with
a leading bit of 1, it can represent all these integers biased by (-2)^{n} = 2^{n}.
The new range is

2^{n} - (2/3)(2^{n} - 1) <= x <= 2^{n} + (1/3)(2^{n} - 1),

or

(1/3)(2^{n} + 2) <= x <= (1/3)(2^{n+2} - 1).

This is contiguous to the range given by (1a), so for a word size of n + 1 bits, all integers in the range

-(2/3)(2^{n} - 1) <= x <= (1/3)(2^{n+2} - 1)
are represented once and only once. This agrees with (1b), with n replaced by n + 1.

The proof that (1a) follows from (1b), for n odd, and that all integers in the range are uniquely represented, is similar.

To add and subtract, the usual rules, such as 0 + 1 = 1 and 1 - 1 = 0, of course apply. Because 2 is written 110, and -1 is written 11, and so on, the following additional rules apply. These, together with the obvious ones, suffice.

0 - 1 = 11
1 + 1 = 110
1 + 1 + 1 = 111
11 + 1 = 0

When adding or subtracting, there are sometimes two carry bits. The carry bits are to be added to their column, even when subtracting. It is convenient to place them both over the next bit to the left, and simplify (when possible) using 11 + 1 = 0. If 11 is carried to a column that contains two 0's, bring down a 1 and carry a 1. Below are examples.

     Addition                    Subtraction

11 1 11   11                 1 11 1       1
   1 0 1 1 1      19           1 0 1 0 1      21
+1 1 0 1 0 1   +(-11)       -1 0 1 1 1 0  -(-38)
------------   ------       -------------  -----
 0 1 1 0 0 0       8         1 0 0 1 1 1 1    59

The only carries possible are 0, 1, and 11. Overflow occurs if there is a carry (either 1 or 11) out of the high-order position. These remarks apply to both addition and subtraction.

Because there are three possibilities for the carry, a base -2 adder would be more complex than a two's-complement adder.

There are two ways to negate an integer. It may be added to itself shifted left one position (because a left shift multiplies by -2, and -2x + x = -x), or it may be subtracted from 0. There is no rule as simple and convenient as the "complement and add 1" rule of two's-complement arithmetic. In two's-complement, this rule is used to build a subtracter from an adder (to compute A - B, form A + B' + 1, where B' denotes the one's-complement of B). There does not seem to be any such simple device for base -2.

Multiplication of base -2 integers is straightforward. Just use the rule that 1 x 1 = 1 and 0 times either 0 or 1 is 0, and add the columns using base -2 addition.

Division, however, is quite complicated. It is a real challenge to devise a reasonable hardware division algorithm, that is, one based on repeated subtraction and shifting. Figure 12-1 shows an algorithm that is expressed, for definiteness, for an 8-bit machine. It does modulus division (nonnegative remainder).

int divbm2(int n, int d) {          // q = n/d in base -2.
   int r, dw, c, q, i;

   r = n;                           // Init. remainder.
   dw = (-128)*d;                   // Position d.
   c = (-43)*d;                     // Init. comparand.
   if (d > 0) c = c + d;
   q = 0;                           // Init. quotient.
   for (i = 7; i >= 0; i--) {
      if (d > 0 ^ (i&1) == 0 ^ r >= c) {
         q = q | (1 << i);          // Set a quotient bit.
         r = r - dw;                // Subtract d shifted.
      }
      dw = dw/(-2);                 // Position d.
      if (d > 0) c = c - 2*d;      // Set comparand for
      else c = c + d;              // next iteration.
      c = c/(-2);
   }
   return q;                        // Return quotient in
                                    // base -2.
                                    // Remainder is r,
}                                   // 0 <= r < |d|.

Although this program is written in C and was tested on a
binary two's-complement machine, that is immaterial; it should be viewed
somewhat abstractly. The input quantities `n`
and `d`, and all internal variables
except for `q`, are simply numbers
without any particular representation. The output `q` is a string of bits to be interpreted in base -2.

This requires a little explanation. If the input quantities
were in base -2, the algorithm would be very awkward to express in an
executable form. For example, the test "`if`
`(d` `>` `0)`"
would have to test that the most significant bit of `d` is in an even position. The addition in "`c` `=`
`c` `+` `d`"
would have to be a base -2 addition. The code would be very hard to read. The
way the algorithm is coded, you should think of `n` and `d` as
numbers without any particular representation. The code shows the arithmetic
operations to be performed, whatever encoding is used. If the numbers are
encoded in base -2, as they would be in hardware that implements this
algorithm, the multiplication by -128 is a left shift of seven positions, and
the divisions by -2 are right shifts of one position.

As examples, the code computes values as follows:

divbm2(6, 2) = 7 (six divided by two is 111_{-2})

divbm2(-4, 3) = 2 (minus four divided by three is 10_{-2})

divbm2(-4, -3) = 6 (minus four divided by minus three is 110_{-2})

The step `q = q | (1 << i);` represents simply setting bit i of `q`.
The next line, `r = r - dw`, represents reducing the remainder by the divisor `d` shifted left.

The algorithm is difficult to describe in detail, but we will try to give the general idea.

Consider determining the value of the first bit of the
quotient, bit 7 of `q`. In base -2
8-bit numbers that have their most significant bit "on" range in
value from -170 to -43. Therefore, ignoring the possibility of overflow, the
first (most significant) quotient bit will be 1 if (and only if) the quotient
will be algebraically less than or equal to -43.

Because n = qd + r and, for a positive divisor, r <= d - 1, the first quotient bit will be 1 if and only if n <= -43d + (d - 1), or n < -43d + d. For a negative divisor, the first quotient bit will be 1 if and only if n >= -43d (because r >= 0 for modulus division).

Thus, the first quotient bit is 1 if

(d > 0 & n < -43d + d) | (d < 0 & n >= -43d).

Ignoring the possibility that d = 0, this can be written as

(d > 0) ^ (n >= c),

where c = -43d + d if d > 0, and c = -43d if d < 0.

This is the logic for determining a quotient bit for an
odd-numbered bit position. For an even-numbered position, the logic is
reversed. Hence the test includes the term `(i&1) == 0`. (The `^`
character in the program denotes exclusive or.)

At each iteration, `c`
is set equal to the smallest (closest to zero) integer that must have a 1-bit
at position `i` after dividing by `d`. If the current remainder `r` exceeds that, then bit `i` of `q`
is set to 1 and `r` is adjusted by
subtracting the value of a 1 at that position, multiplied by the divisor `d`. No real multiplication is required
here; `d` is simply positioned
properly and subtracted.

The algorithm is not elegant. It is awkward to implement because there are several additions, subtractions, and comparisons, and there is even a multiplication (by a constant) that must be done at the beginning. One might hope for a "uniform" algorithm, one that does not test the signs of the arguments and do different things depending on the outcome. Such a uniform algorithm, however, probably does not exist for base -2 (or for two's-complement arithmetic). The reason for this is that division is inherently a non-uniform process. Consider the simplest algorithm of the shift-and-subtract type. This algorithm would not shift at all, but for positive arguments would simply subtract the divisor from the dividend repeatedly, counting the number of subtractions performed until the remainder is less than the divisor. But if the dividend is negative (and the divisor is positive), the process is to add the divisor repeatedly until the remainder is 0 or positive, and the quotient is the negative of the count obtained. The process is still different if the divisor is negative.

In spite of this, division is a uniform process for the signed-magnitude representation of numbers. With such a representation, the magnitudes are positive, so the algorithm can simply subtract magnitudes and count until the remainder is negative, and then set the sign bit of the quotient to the exclusive or of the arguments, and the sign bit of the remainder equal to the sign of the dividend (this gives ordinary truncating division).

The algorithm given above could be made more uniform, in a sense, by first complementing the divisor, if it is negative, and then performing the steps given, simplified by the knowledge that d > 0. Then a correction would be performed at the end. For modulus division, the correction is to negate the quotient and leave the remainder unchanged. This moves some of the tests out of the loop, but the algorithm as a whole is still not pretty.

It is interesting to contrast the commonly used number representations and base -2 regarding the question of whether or not the computer hardware treats numbers uniformly in carrying out the four fundamental arithmetic operations. We don't have a precise definition of "uniformly," but basically it means free of operations that might or might not be done, depending on the signs of the arguments. We consider setting the sign bit of the result equal to the exclusive or of the signs of the arguments to be a uniform operation. Table 12-2 shows which operations treat their operands uniformly with various number representations.

One's-complement addition and subtraction are done uniformly by means of the "end around carry" trick. For addition, all bits, including the sign bit, are added in the usual binary way, and the carry out of the leftmost bit (the sign bit) is added to the least significant position. This process always terminates right away (that is, the addition of the carry cannot generate another carry out of the sign bit position).

In the case of two's-complement multiplication, the entry is "yes" if only the right half of the doubleword product is desired.

We conclude this discussion of the base -2 number system with some observations about how to convert between straight binary and base -2.

To convert to binary from base -2, form a word that has only the bits with positive weight, and subtract a word that has only the bits with negative weight, using the subtraction rules of binary arithmetic. An alternative method that may be a little simpler is to extract the bits appearing in the negative weight positions, shift them one position to the left, and subtract the extracted number from the original number using the subtraction rules of ordinary binary arithmetic.

To convert to base -2 from binary, extract the bits appearing
in the odd positions (positions weighted by 2^{n}
with n odd), shift them one position to the
left, and add the two numbers using the addition rules of base -2. Here are two
examples:

    Binary from base -2               Base -2 from binary

       110111  (-13)                     110111  (55)
  -   1000100  (binary subtract)      + 1000100  (base -2 add)
  -----------                         ----------
  ...111110011 (-13)                    1001011  (55)

On a computer, with its fixed word size, these conversions work for negative numbers if the carries out of the high-order position are simply discarded. To illustrate, the example on the right above can be regarded as converting -9 to base -2 from binary if the word size is six bits.

The above algorithm for converting to base -2 cannot easily be implemented in software on a binary computer, because it requires doing addition in base -2. Schroeppel [HAK, item 128] overcomes this with a much cleverer and more useful way to do the conversions in both directions. To convert to binary, his method is

B = (N ^ mask) - mask,

where mask has 1-bits in all the odd (negative-weight) positions, and the exclusive or and the subtraction are ordinary binary operations.

To see why this works, let the base -2 number consist of the four digits abcd. Then, interpreted (erroneously) in straight binary, this is 8a + 4b + 2c + d. After the exclusive or, interpreted in binary it is 8(1 - a) + 4b + 2(1 - c) + d. After the (binary) subtraction of 8 + 2, it is - 8a + 4b - 2c + d, which is its value interpreted in base -2.

Schroeppel's formula can be readily solved for N in terms of B, so it gives a three-instruction method for converting in the other direction. Collecting these results, we have the following formula for converting to binary, for a 32-bit machine:

B = (N ^ 0xAAAAAAAA) - 0xAAAAAAAA,

and the following, for converting to base -2 from binary:

N = (B + 0xAAAAAAAA) ^ 0xAAAAAAAA.