## About the Decimal/Two’s Complement Converter

This is a *decimal to two’s complement* converter and a *two’s complement to decimal* converter. These converters *do not* complement their input; that is, they do not negate it. They just convert it to or from two’s complement form. For example, -7 converts to 11111001 (to 8 bits), which is -7 in two’s complement. (Complementing it would make it 7, or 00000111 to 8 bits.) Similarly, 0011 converts to 3, not -3.

## How to Use the Decimal/Two’s Complement Converter

### Decimal to Two’s Complement

- Enter a positive or negative integer.
- Set the number of bits for the two’s complement representation (if different from the default).
- Click ‘Convert’ to convert.
- Click ‘Clear’ to reset the form and start from scratch.

If you want to convert another number, just type over the original number and click ‘Convert’ — there is no need to click ‘Clear’ first.

If the number you enter is too big to be represented in the requested number of bits, you will get an error message saying so, along with the number of bits you would need.
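The minimum bit width can also be computed directly. Here is a small Python sketch (the function name is mine, not the converter’s): a nonnegative number needs one bit more than its magnitude for the 0 sign bit, and a negative number n fits in b bits when -2^(b-1) <= n.

```python
def min_bits_twos_complement(n: int) -> int:
    """Minimum number of bits to represent integer n in two's complement."""
    if n >= 0:
        # Magnitude bits plus a leading 0 sign bit.
        return n.bit_length() + 1
    # Negative: need -2**(b-1) <= n, i.e. b >= (-n - 1).bit_length() + 1.
    return (-n - 1).bit_length() + 1

print(min_bits_twos_complement(7))    # 7 fits in 4 bits (0111)
print(min_bits_twos_complement(-8))   # -8 fits in 4 bits (1000)
print(min_bits_twos_complement(8))    # 8 needs 5 bits (01000)
```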

### Two’s Complement to Decimal

- Enter a two’s complement number — a string of 0s and 1s.
- Set the number of bits to match the length of the input (if different from the default).
- Click ‘Convert’ to convert.
- Click ‘Clear’ to reset the form and start from scratch.

The output will be a positive or negative decimal number.

## Exploring Properties of Two’s Complement Conversion

The best way to explore two’s complement conversion is to start out with a small number of bits. For example, let’s start with 4 bits, which can represent 16 decimal numbers, the range -8 to 7. Here’s what the decimal to two’s complement converter returns for these 16 values:

Decimal Number | Two’s Complement |
---|---|
-8 | 1000 |
-7 | 1001 |
-6 | 1010 |
-5 | 1011 |
-4 | 1100 |
-3 | 1101 |
-2 | 1110 |
-1 | 1111 |
0 | 0000 |
1 | 0001 |
2 | 0010 |
3 | 0011 |
4 | 0100 |
5 | 0101 |
6 | 0110 |
7 | 0111 |

Nonnegative integers always start with a ‘0’, and will have as many leading zeros as necessary to pad them out to the required number of bits. (If you strip the leading zeros, you’ll get the pure binary representation of the number.) Negative integers always start with a ‘1’.

If you run those two’s complement values through the two’s complement to decimal converter, you will confirm that the conversions are correct. Here is the same table, but listed in binary lexicographical order:

Two’s Complement | Decimal Number |
---|---|
0000 | 0 |
0001 | 1 |
0010 | 2 |
0011 | 3 |
0100 | 4 |
0101 | 5 |
0110 | 6 |
0111 | 7 |
1000 | -8 |
1001 | -7 |
1010 | -6 |
1011 | -5 |
1100 | -4 |
1101 | -3 |
1110 | -2 |
1111 | -1 |

No matter how many bits you use in your two’s complement representation, -1 decimal is always a string of 1s in binary.
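Both tables, and the all-1s property, can be checked with a short Python sketch (the helper name is mine): read a bit string as unsigned binary, then subtract 2^numBits if the leading bit is 1.

```python
def value(bits: str) -> int:
    """Decimal value of a two's complement bit string."""
    # Read as unsigned binary, then subtract 2**len(bits) if leading bit is 1.
    return int(bits, 2) - ((1 << len(bits)) if bits[0] == "1" else 0)

# Reproduce the table: 0000..0111 map to 0..7, 1000..1111 map to -8..-1.
table = [(format(i, "04b"), value(format(i, "04b"))) for i in range(16)]
assert [v for _, v in table] == list(range(8)) + list(range(-8, 0))

# -1 is a string of 1s at any width.
assert all(value("1" * w) == -1 for w in (4, 8, 16, 512))
```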

## Converting Two’s Complement Fixed-Point to Decimal

You can use the two’s complement to decimal converter to convert numbers that are in fixed-point two’s complement notation. For example, if you have 16-bit numbers in Q7.8 format, enter the two’s complement value, and then just divide the decimal answer by 2^8. (Numbers in Q7.8 format range from -2^15/2^8 = -128 to (2^15-1)/2^8 = 127.99609375.) Here are some examples:

- 0101111101010101 converts to 24405, and 24405/2^8 = 95.33203125
- 1101010101110111 converts to -10889, and -10889/2^8 = -42.53515625
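The same two steps can be sketched in Python (the function name is mine): interpret the 16 bits as a two’s complement integer, then scale by the 8 fractional bits.

```python
def q7_8_to_decimal(bits: str) -> float:
    """Convert a 16-bit two's complement string in Q7.8 format to decimal."""
    value = int(bits, 2)          # read the bits as unsigned binary
    if bits[0] == "1":            # leading 1 means negative
        value -= 1 << 16          # two's complement adjustment for 16 bits
    return value / 2**8           # divide by 2**8 for the fractional scaling

print(q7_8_to_decimal("0101111101010101"))  # 95.33203125
print(q7_8_to_decimal("1101010101110111"))  # -42.53515625
```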

## Implementation

This converter is implemented in arbitrary-precision decimal arithmetic. Instead of operating on the binary representation of the inputs — in the usual “flip the bits and add 1” way — it does operations on the decimal representation of the inputs, adding or subtracting a power of two. Specifically, this is what’s done and when:

- Decimal to two’s complement
  - Nonnegative input: Simply convert to binary and pad with leading 0s.
  - Negative input (‘-’ sign): Add 2^numBits, then convert to binary.
- Two’s complement to decimal
  - Nonnegative input (leading ‘0’ bit): Simply convert to decimal.
  - Negative input (leading ‘1’ bit): Convert to decimal, getting a positive number, then subtract 2^numBits.
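A minimal Python sketch of that approach (function names are mine; Python’s arbitrary-precision integers stand in for the converter’s arbitrary-precision decimal arithmetic):

```python
def to_twos_complement(n: int, num_bits: int) -> str:
    """Decimal to two's complement, using the add-a-power-of-two method."""
    if n < 0:
        n += 1 << num_bits             # negative input: add 2**numBits...
    return format(n, f"0{num_bits}b")  # ...then convert to binary, 0-padded

def from_twos_complement(bits: str) -> int:
    """Two's complement to decimal, using the subtract-a-power-of-two method."""
    value = int(bits, 2)               # convert to decimal (a positive number)
    if bits[0] == "1":                 # leading '1' bit: negative input
        value -= 1 << len(bits)        # subtract 2**numBits
    return value

print(to_twos_complement(-7, 8))       # 11111001, as in the example above
print(from_twos_complement("0011"))    # 3 (not -3)
```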

## Limits

For practical reasons, I’ve set an arbitrary limit of 512 bits on the inputs.