Minus Zero

Most computers today use two's complement representation for negative integers. The UNIVAC® 1100 family, however, used one's complement. On this page we'll look at the differences among these representations and the most eccentric aspect of one's complement: minus zero, a number dearly cherished by clever 1100 series programmers. When discussing binary numbers, we'll use octal notation, as was the practice on the 1100 series. Each digit between 0 and 7 represents three binary bits, and hence a 36-bit word is written as 12 octal digits.
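
If you'd like to play along at home, a line of Python suffices to print words in this format (the helper name is mine, not anything from the 1100 world):

    def octal36(word):
        # Render a 36-bit word as 12 octal digits, 1100-style.
        return format(word & 0o777777777777, "012o")

    print(octal36(11))                  # 000000000013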

Signed Magnitude

The most obvious way to represent positive and negative numbers in a binary computer is signed magnitude, a direct analogue to how numbers are written in decimal notation. A sign bit is dedicated to indicating whether the number is positive or negative, with the rest of the bits giving the magnitude of the number. On almost all computers the most significant bit is used for the sign bit, zero signifying positive and one negative. (There's no reason you couldn't use some other bit as the sign or have one mean positive, but that would complicate the electronics and render impossible several programming tricks we'll get into later on.)

Let's consider, then, how we'd store the number 11, both positive and negative, on a 36-bit signed magnitude binary computer. The binary representation of 11 is 1011, or 013 in octal, so positive 11 becomes:

+11 = 000000000013

Since we're using the most significant (2^35) bit of the word to denote the sign, with one denoting negative, it is evident that the 36-bit signed magnitude representation of −11 is:

−11 = 400000000013

(Remember, octal digit 4 corresponds to bit pattern 100, so a leading 4 indicates the sign bit is set, informing us that the value in the rest of the word is negative.)
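
To make the rule concrete, here's a minimal sketch of 36-bit signed magnitude encoding in Python (the names are mine, not anything from the 1100 toolchain):

    SIGN = 1 << 35                      # the 2^35 sign bit

    def sm_encode(n):
        # Signed magnitude: the sign bit, plus the magnitude unchanged.
        return (SIGN if n < 0 else 0) | abs(n)

    print(format(sm_encode(11), "012o"))    # 000000000013
    print(format(sm_encode(-11), "012o"))   # 400000000013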

Signed magnitude is straightforward, easy to understand since it parallels the notation we're used to, and easy to decode by hand while debugging. So naturally, you'd expect there to be an excellent engineering reason why it shouldn't be used, and indeed there is. The fly lands in your Pepsi glass at the point where you move on from storing numbers to doing arithmetic with them. Consider: when the computer adds two numbers, with signed magnitude it first has to look at the signs of both numbers, then decide, based upon the signs, whether to add them or subtract one from the other, and what sign the result will bear. This doesn't seem like such a difficult problem today, when hardware is, by the standards of the 1950's when the 1100 series was designed, free, but when you put yourself in the place of designers who knew that each logic gate cost several dollars and, in the vacuum tube era, took up substantial space and gave off more heat than an entire computer does today, the need to simplify was compelling. Wouldn't it be great if the computer's arithmetic unit never needed to know if a number was positive or negative? This turns out to be possible, and led to the wide adoption of other representations of negative numbers, which we'll now examine.
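
To see just how much deciding is involved, here's a Python sketch of the case analysis a signed magnitude adder must perform (an illustration of the decision tree, not any particular machine's logic; overflow is ignored):

    SIGN = 1 << 35
    MAG = SIGN - 1                      # mask for the magnitude bits

    def sm_add(a, b):
        sa, ma = a & SIGN, a & MAG
        sb, mb = b & SIGN, b & MAG
        if sa == sb:                    # like signs: add magnitudes, keep sign
            return sa | ((ma + mb) & MAG)
        if ma >= mb:                    # unlike signs: subtract the smaller...
            return sa | (ma - mb)       # ...and take the larger operand's sign
        return sb | (mb - ma)

    print(format(sm_add(SIGN | 11, 4), "012o"))     # 400000000007, i.e. -7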

(As hardware prices have plummeted, the relative advantages of various ways of doing arithmetic have become less significant. IEEE Std 754 floating point, used by virtually all contemporary computers, employs signed magnitude for negative numbers.)

Two's Complement

Designers of mechanical calculators confronted the problem of representing negative numbers decades before the first electronic computers. With only gears and levers at their disposal, simplicity was essential, and they developed an ingenious way to represent negative numbers called ten's complement. Suppose we have a four-digit decimal calculator. The number 11 will then be represented as 0011. What if we want to put in −11? In ten's complement, if the number is negative we subtract its magnitude from the number one greater than the largest our register can hold and enter the result. The largest number our four-digit calculator can handle is 9999, so to get the ten's complement of −11, we compute 10000 − 11 = 9989.

The point of all this is now we have a way to compute with positive and negative numbers without ever worrying about their signs. To see how it works, let's add 0011 and 9989, the ten's complement of −11. Crank, grind, crunch, and our calculator prints 0000 on the tape. Wait a minute, you exclaim, when I add 0011 and 9989, I get 10000! That's right, but remember we're using a four digit calculator, so the carry into the fifth digit just disappears, leaving the result of 0000.
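
It takes only a few lines of Python to replay this on a simulated four-digit register (the modulus plays the part of the carry falling off the end):

    MOD = 10 ** 4                       # a four-digit register

    def tens_complement(n):
        # Negative numbers: subtract the magnitude from 10000.
        return n % MOD                  # -11 % 10000 == 9989

    print(tens_complement(-11))         # 9989
    print((11 + 9989) % MOD)            # 0 -- the carry just disappears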

Since adding 0011 and the ten's complement of −11, 9989, yielded zero (as long as we forget about the carry, as the calculator does), we seem to have found a way that the calculator can work without worrying about the sign and, as a little experimentation will show, we have indeed. Nicer still, we can leave to the user whether the calculator is considered to work on positive numbers from 0 to 9999 or signed numbers in the range from −5000 to +4999; all it takes is a little “user interface”, a few more gears to convert numbers back and forth to ten's complement, and we're in business. The range of numbers looks a little odd, doesn't it? Let's see where that crept in. Taking the rules for forming the ten's complement, −1 becomes 9999, −2 becomes 9998, and so on, with the most negative value, −5000, represented as 5000. And since 5000 is spoken for as the most negative number, we can't have positive numbers greater than 4999, because they'd overflow into the negative range. Irritating, but I suppose we'll learn to live with it.

More serious is discovering you can't divide a negative ten's complement number by 10 by shifting it right one place, as we do so easily with positive numbers. Consider 11: if we want to divide 0011 by 10, we just shift it to the right one place yielding the quotient of 0001 (the remainder is discarded). If you want to make a calculator multiply, this is extremely nice since you can rig the gear wheels to shift left and right and do everything with addition. But it doesn't work for negative numbers. Consider −11, which we represent as 9989. If we shift that right one position (understanding that we shift in 9 at the left when the number was negative to start with, in order to preserve the ten's complement of the top digit), we end up with 9998, which is −2. Bzzzzzt…wrong, we should have gotten −1, or 9999. The culprit in the divide-by-shifting caper turns out to be the same asymmetry around zero which caused us to end up with a negative number with no positive counterpart.
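
Here's the failure replayed in Python, shifting digits the way the gears would (with the nine-fill for negative values made explicit):

    def shift_right_tens(n):
        # Drop the low digit; shift in a 9 if the number was negative.
        fill = 9000 if n >= 5000 else 0
        return fill + n // 10

    print(shift_right_tens(11))         # 1 -- fine: 11 / 10
    print(shift_right_tens(9989))       # 9998, which is -2; we wanted -1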

We've been discussing decimal computation so far. For binary computers there is a precise counterpart to ten's complement: two's complement. (The technique works in any number base; if you were computing in hexadecimal, you could use “sixteen's complement” for negative numbers.) Suppose that instead of a four digit decimal calculator we have a 12-bit binary computer. (I choose 12 bits since that's four octal digits). Taking the number 11 again, in binary that's 000000001011 or, in octal 0013. To form the two's complement for −11, we subtract the magnitude from binary 1000000000000 (octal 10000), which gives binary 111111110101, or octal 7765. Adding this back to 0013 yields 0000, confirming that the scheme works as well in base two as it did for ten.
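
And the same confirmation in Python for the 12-bit case (masking by the modulus models the discarded carry):

    MOD = 1 << 12                       # binary 1000000000000, octal 10000

    comp = MOD - 11                     # two's complement form of -11
    print(format(comp, "04o"))          # 7765
    print(format((11 + comp) % MOD, "04o"))     # 0000 -- carry discarded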

Now that we've made the transition to binary, the problem with shifting two's complement numbers is even more nettlesome. Dividing by a power of two is something you do all the time in software for a binary computer and, while it's OK for positive numbers, you can't do it if the dividend might be negative. A compiler generating code for a FORTRAN program, for example, generally doesn't know when compiling:

        J = I / 256

whether I might contain a negative value. Not knowing, it has no alternative but to generate a divide by 256 rather than a shift by 8 bits which, on computers like the 1100 series, is about ten times faster.
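
The mismatch is easy to demonstrate in Python, whose integers shift like two's complement values: an arithmetic right shift rounds toward minus infinity, while FORTRAN's division truncates toward zero (trunc_div below is my stand-in for what the compiled divide must compute):

    def trunc_div(n, d):
        # FORTRAN-style integer division: truncate toward zero.
        q = abs(n) // d
        return -q if n < 0 else q

    I = -300
    print(trunc_div(I, 256))            # -1, what J = I / 256 must yield
    print(I >> 8)                       # -2, what the 8-bit shift would yield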

Further, once in the binary world you start doing lots of logical operations on bits. But, you quickly discover, logical negation (changing all the ones into zeroes and vice versa) is not the same as calculating the two's complement for a negative number: the values will always differ by one. Since logical and arithmetic negation are both things you do all the time, that means you have to put in separate instructions for each—more expensive hardware, and the programmer has to be careful to choose the right one, depending on how the value in a word is being used…messy.
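
Python's integers make the off-by-one easy to see, since ~ behaves like a two's complement logical NOT:

    for x in (1, 11, 42):
        # Logical negation differs from arithmetic negation by exactly one.
        print(x, -x, ~x, ~x == -x - 1)  # ... True, every time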

It was these shortcomings, especially apparent in a binary architecture, which motivated the engineers designing the 1100 series to look beyond two's complement for an even better solution. In the next section we'll see what they found. If their choice seems odd, it's because in the intervening years the computer industry has pretty much settled on two's complement notation for negative integers, deciding, as it were, that the shortcomings we've discussed above can be lived with, and that the merits of other notations are outweighed by disadvantages of their own at least as serious as the drawbacks of two's complement. So, when you examine the instruction set of a present-day microprocessor such as the Intel x86, you'll find separate instructions for negating logical values (NOT) and integers (NEG) and, in many programming primers, an explanation of what each does and when to use which.

One's Complement

Since it appears most of the problems we encountered with ten's and two's complement are due to the asymmetry around zero, what if we eliminate it by making all the negative numbers one less? It turns out this is equivalent (for the decimal case) to subtracting each individual digit of the magnitude of the number from nine, yielding a nine's complement representation. Turning to the familiar case of 11, +11 is written as 0011 and to get −11, we subtract each digit from 9, resulting in 9988. Recalling that the ten's complement of −11 was 9989, you'll see we have indeed moved all the negative values down and eliminated the asymmetry at zero.

It's also evident that since subtracting a decimal digit from 9 is its own inverse, we can invert the sign of any number, positive or negative, by taking its nine's complement. Subtracting each digit of 9988 from 9 gives us back 0011. So far, so good; this is looking better and better. Now let's try some arithmetic, for example adding −1 and 10. The nine's complement of −1 is 9998, and adding that to 0010, the calculator prints 0008. Uh oh, wrong answer. Examining more closely shows that in nine's complement the carry we so blithely discarded in ten's complement now has to be accounted for. Adding 9998 and 0010 on a piece of paper rather than our busted calculator gives a sum of 10008, with a carry out of the fourth digit. To compute correctly in nine's complement, this carry has to be added back in to the least significant (rightmost) digit of the sum, a process UNIVAC engineers referred to as an “end-around carry”. Adding extra gears to our calculator to handle this carry adds the one carried out of the top digit back into the low-order digit of 0008, and now the correct sum, 0009, appears on the tape. Excellent!
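
Here is a toy Python model of a four-digit nine's complement adder with the end-around carry bolted on (my reconstruction of the mechanism just described):

    MOD = 10 ** 4                       # four decimal digits

    def nines_add(a, b):
        s = a + b
        if s >= MOD:                    # a carry out of the top digit...
            s = s - MOD + 1             # ...gets added back at the bottom
        return s

    print(nines_add(9998, 10))          # 9 -- that is, -1 + 10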

The problem of shifting negative numbers has also been fixed. Shifting −11 (9988) right one place (again, shifting in a nine if negative) yields 9998, for which the nine's complement (subtracting each digit from 9, remember) is 0001, so the result is −1: exactly the quotient we wanted. In fact, this works in all circumstances. The oddity of a negative number with no positive counterpart is also gone; the nine's complement of the largest positive number, 4999, is 5000, which represents −4999 just as you'd expect.
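
Replaying the shift with the same digit-shifting helper as before, but now interpreting the result as nine's complement:

    def shift_right_nines(n):
        # Drop the low digit; shift in a 9 if the number was negative.
        fill = 9000 if n >= 5000 else 0
        return fill + n // 10

    print(shift_right_nines(9988))      # 9998: nine's complement of 0001, so -1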

From a hardware standpoint, the need for the end-around carry looks bothersome, especially when you consider that it might result in propagating a carry all the way back through a number you've just finished adding. On the other hand, the process of negating a number is simplified. Now we'll move on to binary to see how it works with bits instead of decimal digits.

As you'd expect, the binary counterpart of nine's complement is one's complement, and we form the one's complement of a binary number by subtracting each digit from 1. But with binary numbers, that is precisely the same as just replacing all the one bits with zeroes and vice versa! One's complement has eliminated the distinction between logical and arithmetic negation and the need for separate instructions for each operation.
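
In Python terms, subtracting every bit from 1 and inverting the bits coincide exactly (a 12-bit sketch):

    MASK = (1 << 12) - 1                # twelve one bits

    def ones_complement(x):
        return MASK - x                 # subtract each bit from 1...

    x = 0o0013                          # 11
    print(format(ones_complement(x), "04o"))    # 7764
    print(ones_complement(x) == (~x & MASK))    # True: same as flipping bits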

In summary, by admitting the added complexity of end-around carry, we have obtained a way of representing negative numbers which is symmetric, in which power-of-two division can be done by shifting for all numbers, and where negating a number and inverting all its bits are one and the same thing. But we've gotten something else as well:

Minus Zero!

If the one's complement of 1, binary 000000000001, is 111111111110, then what is the one's complement of zero, binary 000000000000? Well, of course that works out to be 111111111111: minus zero! Let's explore the consequences of this, shifting back to the decimal equivalent of nine's complement since that's easier to follow.

The nine's complement of 0000 is 9999, decimal minus zero. What happens, for example, when we add minus zero to, say, ten (0010)? The sum of 10 and 9999 is 10009, and performing the end-around carry gets us back to 0010, so minus zero is well behaved in addition and, it turns out, all other arithmetic operations as well. If we add +0 (0000) and −0 (9999) we get 9999, −0, which is still zero, so that's okay as well, if a bit odd at first glance. Maybe if we never start out with minus zero, we can ignore it? Unfortunately, no: consider the case of adding +11 (0011) and −11 (9988). We get zero, sure enough, but it's minus zero, 9999.
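
Here's the same arithmetic run through a toy end-around-carry adder in Python (the same model as before, repeated so the snippet stands alone):

    MOD = 10 ** 4

    def nines_add(a, b):
        s = a + b
        return s - MOD + 1 if s >= MOD else s   # end-around carry

    print(nines_add(10, 9999))          # 10 -- adding minus zero is harmless
    print(nines_add(0, 9999))           # 9999 -- +0 plus -0 gives -0
    print(nines_add(11, 9988))          # 9999 -- minus zero pops out!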

Now suppose we want to test whether a number is zero, something any program needs to do frequently. That seems to have gotten a bit sticky, since we've seen that minus zero can pop out of an innocuous calculation (due to the way the adder in the 1100 series operated, minus zero was generated under different circumstances than in this simplified example, but it was generated nonetheless). It appears that every time we want to test for zero, we have to see if it's +0 or −0: a real drag. We could always modify the hardware to do this automatically, so that all zero test instructions considered either +0 or −0 to be zero, and this is precisely what the UNIVAC 1100 series (and most other one's complement architectures) did. On a two's complement machine, there's one and only one zero made up of all zero bits, all ones denoting −1, so there is no problem testing for zero.
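
The rule the hardware applied amounts to accepting two bit patterns as zero; for a 36-bit one's complement word, a sketch:

    MINUS_ZERO = (1 << 36) - 1          # 777777777777 octal

    def is_zero(w):
        # Both all-zeroes (+0) and all-ones (-0) count as zero.
        return w == 0 or w == MINUS_ZERO

    print(is_zero(0), is_zero(MINUS_ZERO), is_zero(1))  # True True False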

As Prof. William C. Lynch remarked in the heyday of minus zero, “give a programmer a glitch and he'll try to drive a truck through it”, and minus zero was a glitch big enough to roll a whole convoy through, good buddy. Consider the following UNIVAC assembly code (consult the instruction set reference if you're hazy on the op codes), and remember that UNIVAC 1100 test instructions skipped the next instruction in line if the condition was true.

        LA      A0,VALUE1             Load first number
        LA      A1,VALUE2             Load second number
        TNE     A0,A1                 Are they equal?
        J       ARE$EQUAL             Yes, they are
        ANA     A0,A1                 Not equal, huh?  Subtract them
        TNZ     A0                    Is the result zero?
        J       HOW$DID$THIS$HAPPEN   Huh?  Unequal but difference is zero ?

In order to be useful when comparing arbitrary bit patterns, the Test Equal (TE) and Test Not Equal (TNE) and related instructions only consider two values equal if they contain precisely the same bit pattern. Suppose VALUE1 is −0 (777777777777 octal) and VALUE2 is +0 (000000000000 octal). You can't get more different than that, can you? Every bit is different, so clearly these values are not equal. But when we subtract the second from the first, since we're subtracting zero from zero we end up with (as it happens, minus, see below for details) zero, and the Test Nonzero (TNZ) instruction, which considers both +0 and −0 to be zero, fails to skip since the difference in A0 is (minus) zero.

This example may seem a bit contrived, but consider the following trap which many a novice 1100 programmer stepped into, especially those who first learned on a two's complement machine. (Enclosing a value in parentheses made a reference to a memory cell containing that value. [Grognards: yes, I remember, and would have coded “,U”, but I don't want to explain that here]).

.       <Do some computation>
        TNE     A0,(0)                Was the result zero?
        J       GOT$ZERO              Yes.  We're done

If the calculation happens to end up with −0, gotcha!…that's not zero according to the bit-by-bit test performed by TNE. Another trap which frequently snared those used to two's complement was confusing “positive” and “negative”, which on a one's complement machine are properties shared by all numbers, including zero. The various instructions which tested positive and negative simply tested whether the most significant bit was zero (positive) or one (negative). This could lead to the puzzle:

        TP      VALUE                 Is it positive?
        J       IS$NEGATIVE           Nope, it's negative
        TNZ     VALUE                 Is the result zero?
        J       HOW$DID$THIS$HAPPEN   Huh?  Positive but equal to zero?

If you really needed to know if a value was greater than zero, you wrote instead:

        LA      A0,VALUE              Load value from memory
        TLE     A0,(0)                Less than or equal to zero?
        J       IS$POSITIVE           No, then it must be greater than zero

But even this had a little added twist: if you compared the two, +0 was greater than −0, leading to the conundrum:

        LA      A0,VALUE1             Load first number
        LA      A1,VALUE2             Load second number
        ANU     A1,A0                 Are they equal?  (Difference stored in A2)
        JNZ     A2,NOT$EQUAL          No, they're not
        TG      A0,A1                 Ahhhh, equal.  Is A1 > A0, perchance?
        J       YES$BUT$HOW           Easy, A1 = +0, A0 = -0.  Neat, huh?

I could go on and on. In practice, once you learned the fundamental tricks for testing numbers, you could ignore minus zero in most situations. The 1100 architecture helped by being based upon what was called a “subtractive adder”; the fundamental arithmetic operation was subtraction, not addition, which meant that subtracting a number from itself yielded +0, not −0, which greatly reduced the probability −0 would appear in typical computations. Still, any code which, for example, extracted bits from an integer by logical operations or shifting had to be wary of the possibility its “zero argument” might be, in fact, made up of all one bits. It was not uncommon, for example, to see code which wanted to extract the low-order 6 bits of what purported to be an unsigned integer do the following:

        LA      A0,VALUE              Load the value
        AA      A0,(0)                Add zero (hee hee hee)
        AND     A0,(077)              Extract low-order 6 bits

Add zero? Welcome to Minus Zero Logic (MZL) which, based on the fine details of the 1100 arithmetic unit, obeyed the following rules when adding and subtracting zeroes.

Minus Zero Logic

+0 + +0 = +0
+0 + −0 = +0
−0 + +0 = +0
−0 + −0 = −0

+0 − +0 = +0
+0 − −0 = +0
−0 − +0 = −0
−0 − −0 = +0

Adding zero guarantees that even if we started out with −0, we'll have +0 by the time of the AND, averting confusion between −0 and 63 in the least significant 6 bits.
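
These rules are just what you'd get from a subtractive adder whose fundamental operation is subtraction with an end-around borrow, and which adds by subtracting the one's complement. Here's a toy Python model along those lines; it's my own reconstruction, checked against the tables above rather than against the 1100's actual gates:

    MASK = (1 << 36) - 1                # all ones: minus zero

    def sub(a, b):
        # Subtract directly; a borrow off the top end wraps back around.
        d = a - b
        return d + MASK if d < 0 else d

    def add(a, b):
        # A subtractive adder adds by subtracting the one's complement.
        return sub(a, b ^ MASK)

    def z(v):
        return "-0" if v == MASK else "+0"

    PZ, MZ = 0, MASK
    for x in (PZ, MZ):
        for y in (PZ, MZ):
            print(z(x), "+", z(y), "=", z(add(x, y)), "   ",
                  z(x), "-", z(y), "=", z(sub(x, y)))

    # The "add zero" trick: -0 + +0 yields +0, so the AND is safe.
    print(format(add(MZ, PZ) & 0o77, "02o"))        # 00, not 77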

This elementary introduction to the wonders of minus zero takes us only to the loading dock from which inspired (and totally daft) UNIVAC programmers departed, applying minus zero to return additional status from subroutines (if the result is zero, an error has occurred; if minus zero, a really awful error), or observing that, assuming +0 means TRUE and −0 FALSE, the result of the subtraction:

        LA      A0,B                  Load B
        ANA     A0,A                  Subtract (Add Negative) A

gives, in register A0, the result of the logical implication (conditional) function with A as the antecedent and B the consequent.
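
With +0 standing for TRUE and −0 for FALSE, the claim checks out exhaustively against the same toy subtractive-adder model (again, my reconstruction, not the 1100's actual circuitry):

    MASK = (1 << 36) - 1
    TRUE, FALSE = 0, MASK               # +0 is TRUE, -0 is FALSE

    def sub(a, b):                      # subtraction with end-around borrow
        d = a - b
        return d + MASK if d < 0 else d

    for a in (TRUE, FALSE):
        for b in (TRUE, FALSE):
            implies = (a == FALSE) or (b == TRUE)   # material implication
            print((sub(b, a) == TRUE) == implies)   # True in all four cases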

Notes

  1. UNIVAC has been, over the years, a registered trademark of Eckert-Mauchly Computer Corporation, Remington Rand Corporation, Sperry Rand Corporation, Sperry Corporation, and Unisys Corporation.

References

Sperry Rand Corporation, UNIVAC 1107 Central Computer, UP-2463 Rev. 2. No date; this manual was obtained in 1967.

Sperry Rand Corporation, UNIVAC 1108 Processor and Storage, UP-4053 Rev. 1, 1966, 1970.

Knuth, Donald E. The Art of Computer Programming, Volume 2: Seminumerical Algorithms. Reading, MA: Addison-Wesley, 1969.



Compiled by John Walker
August 19, 1996